ATtiny85 Internal Clock and One-Wire - avr

Is the internal clock on the ATtiny85 sufficiently accurate for one-wire timing?
Per https://learn.sparkfun.com/tutorials/ws2812-breakout-hookup-guide, one-wire timing seems to need accuracy around the 0.05 us range, so a 10% clock error on the AVR at 8 MHz would cause timing differences around 0.0125 us (assuming the 10% error figure is accurate, and that it's a 10% error on frequency, not a ±10% variance on each pulse).
Not a ton of margin - but is it good enough?

First of all, WS2812 LEDs do not use the 1-Wire protocol.
The control protocol of the WS2812 is described in its datasheet.
The short answer is yes: the ATtiny85, and the whole AVR family, have enough clock accuracy to control a WS2812 chain. But the routine should be written in assembler, and no interrupts may be allowed, to guarantee the timing requirements are met. With careful programming, the 8 MHz internal oscillator may even be enough to output different data to two WS2812 chains simultaneously.
So, when running at 8 MHz ±10%, one clock cycle would be roughly 112...138 ns (125 ns ±10%).
The datasheet requires (with ±150 ns tolerance):
When transmitting a "one": high level 550...850 ns; 6 clock cycles (672...828 ns) matches this range (5 clock cycles, 560...690 ns, matches as well)
followed by a low level of 450...750 ns; 5 cycles (560...690 ns)
When transmitting a "zero": high level 200...500 ns; 3 cycles (336...414 ns)
followed by a low level of 650...950 ns; 6 cycles (672...828 ns).
So, as you can see, even considering the ±10% tolerance of the clock source, you can find integer cycle counts that are guaranteed to fall within the required intervals.
Speaking from experience, it will still work even if the low level that follows the pulse is extended by a couple hundred nanoseconds.
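To make the cycle counting concrete, here is a minimal sketch (assumptions: avr-gcc, PB0 as the data pin, 8 MHz clock; a real driver would keep the whole byte loop in assembler and disable interrupts around the entire frame):

// Sketch only: bit-bang one WS2812 byte on PB0 of an ATtiny85 at 8 MHz.
// sbi/cbi take 2 cycles each and the pin changes at the END of the
// instruction, so the high time is (padding + 2) cycles.
// Call cli() before sending a frame and sei() afterwards.
#include <stdint.h>
#include <avr/io.h>

static void ws2812_send_byte(uint8_t b)
{
    for (uint8_t i = 0; i < 8; i++) {
        if (b & 0x80) {
            // "one": high for 6 cycles (~750 ns nominal)
            asm volatile(
                "sbi %[port], %[pin] \n\t"            // pin goes high
                "nop \n\t nop \n\t nop \n\t nop \n\t" // 4 padding cycles
                "cbi %[port], %[pin] \n\t"            // pin goes low
                :: [port] "I" (_SFR_IO_ADDR(PORTB)), [pin] "I" (PB0));
        } else {
            // "zero": high for 3 cycles (~375 ns nominal)
            asm volatile(
                "sbi %[port], %[pin] \n\t"
                "nop \n\t"                            // 1 padding cycle
                "cbi %[port], %[pin] \n\t"
                :: [port] "I" (_SFR_IO_ADDR(PORTB)), [pin] "I" (PB0));
        }
        b <<= 1;  // loop overhead extends the low phase, which is tolerated
    }
}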

There are known issues using the internal oscillator with a UART - the UART needs roughly 2% timing accuracy, while the internal oscillator can be up to 10% off with the factory setting. While it can be calibrated (the AVR has the OSCCAL register for that purpose), its frequency is also influenced by temperature.
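Since OSCCAL was mentioned: a minimal sketch of run-time trimming (the +3 offset is an assumption; the correct value has to be found by measuring the chip against an external reference such as a UART host or a watch crystal):

// Sketch only: trim the internal RC oscillator via the OSCCAL register.
#include <avr/io.h>

void tune_oscillator(void)
{
    OSCCAL = OSCCAL + 3;  // each step moves the frequency by roughly 0.5-1%
}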
It is worth a try, but it might not be reliable across temperature changes or a fluctuating operating voltage.
References: ATmega's internal oscillator - how bad is it, Timing accuracy on tiny2313, Tuning internal oscillator

The timing requirements of NeoPixels (WS2812B) are wide enough that the only really critical part is the minimum width of a 1 bit. The ATtiny85 at 16 MHz is plenty fast to drive a string of them from a GPIO pin. At 8 MHz, it may not work (I haven't tried yet). I just released a small Arduino sketch which allows you to control NeoPixel strings of any length on an ATtiny85 without using any RAM.
https://github.com/bitbank2/NeoPixel
For devices with hardware SPI (e.g. the ATmega328p), it's better to use SPI to shift out the bits (also included in my code).
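The usual SPI trick (sketched below; the function name and the 2.4 MHz figure are assumptions, not taken from the linked repository) is to expand each WS2812 data bit into three SPI bits - 100 for a zero, 110 for a one - so that an SPI bit clock around 2.4 MHz yields roughly the 0.4/0.8 us high times the LEDs expect:

// Sketch only: expand one color byte (MSB first) into three SPI bytes.
#include <stdint.h>

static void ws2812_encode_byte(uint8_t b, uint8_t out[3])
{
    uint32_t acc = 0;
    for (int8_t i = 7; i >= 0; i--) {
        acc <<= 3;
        acc |= (b & (1u << i)) ? 0x6u : 0x4u;  // 0b110 = one, 0b100 = zero
    }
    out[0] = (uint8_t)(acc >> 16);  // shift these out back-to-back; a gap
    out[1] = (uint8_t)(acc >> 8);   // between bytes only stretches the low
    out[2] = (uint8_t)(acc);        // phase, which the LEDs tolerate
}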

Related

How to improve ESP32 sleep time accuracy

When sending an ESP32 device to sleep for approximately a day, it wakes up 3.34% earlier than expected. This amounts to approx. 48 minutes.
Is this the expected accuracy of this device or can it be tuned to be more accurate?
The concrete device is an ESP32-CAM running at 80 MHz at approx. 25°C room temperature.
Code to send the device to sleep:
unsigned int time_to_sleep_sec = 86400;
esp_sleep_enable_timer_wakeup(1ULL * time_to_sleep_sec * 1000 * 1000);
esp_deep_sleep_start();
Instead of 86400 seconds, the device woke up after approx 83465 seconds.
The default RTC clock source is only 5% accurate, so it's within spec. Check the info:
The RTC timer has the following clock sources:
- Internal 150 kHz RC oscillator (default): features the lowest deep sleep current consumption and no dependence on any external components. However, as frequency stability is affected by temperature fluctuations, time may drift in both Deep and Light sleep modes.
- External 32 kHz crystal: requires a 32 kHz crystal to be connected to the 32K_XP and 32K_XN pins. Provides better frequency stability at the expense of slightly higher (by 1 uA) Deep sleep current consumption.
- External 32 kHz oscillator at the 32K_XN pin: allows using a 32 kHz clock generated by an external circuit. The external clock signal must be connected to the 32K_XN pin. The amplitude should be less than 1.2 V for a sine wave signal and less than 1 V for a square wave signal. The common mode voltage should be in the range 0.1 < Vcm < 0.5 × Vamp, where Vamp is the signal amplitude. Additionally, a 1 nF capacitor must be placed between the 32K_XP pin and ground; in this case, the 32K_XP pin cannot be used as a GPIO pin.
- Internal 8.5 MHz oscillator, divided by 256 (~33 kHz): provides better frequency stability than the internal 150 kHz RC oscillator at the expense of higher (by 5 uA) deep sleep current consumption. It also does not require external components.
So if you aren't able to add components, the 5 uA 'expense' of the internal 8.5 MHz oscillator appears reasonable. Otherwise, the best solution is to add an external 32 kHz crystal.
Or you can wake the device up during sleep and correct the time over the internet, e.g. using SNTP.
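If neither hardware change is an option, the measured drift can also be compensated in software. A minimal sketch, assuming the drift factor below (derived from the numbers above: 86400 / 83465 ≈ 1.035) is re-measured per board and per temperature:

// Sketch only: stretch the requested sleep time by the measured drift.
#include <stdint.h>
#include "esp_sleep.h"

#define DRIFT_FACTOR 1.035  /* assumed: requested 86400 s, slept ~83465 s */

void sleep_for_seconds(uint32_t seconds)
{
    uint64_t us = (uint64_t)((double)seconds * DRIFT_FACTOR) * 1000000ULL;
    esp_sleep_enable_timer_wakeup(us);
    esp_deep_sleep_start();
}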

PWM transistor heating - Raspberry

I have a Raspberry Pi and an auxiliary PCB with transistors for driving some LED strips.
The strips' datasheet says 12 V, 13.3 W/m. I'll use 3 strips in parallel, 1.8 m each, so 13.3 * 1.8 * 3 = 71.82 W, which at 12 V is almost 6 A.
I'm using an 8 A transistor, the E13007-2.
In the project I have 5 channels of different LEDs: RGB and 2 types of white.
R, G, B, W1 and W2 are connected directly to Pi pins.
The LED strips are connected to 12 V, with CN3 and CN4 as the GND returns (through the transistors).
Transistor schematic.
I know that's a lot of current passing through the transistors, but is there a way to reduce the heating? I think they're reaching 70-100°C. I already had a problem with one Raspberry Pi, and I think it's getting dangerous for the application. I have some large traces on the PCB, so that's not the problem.
Some thoughts:
1 - A resistor driving the base of the transistor. Maybe it won't reduce heating, but I think it's advisable for short-circuit protection; how can I calculate it?
2 - The PWM has a frequency of 100 Hz; does it make any difference if I reduce this frequency?
The BJT you're using has a current gain (hFE) of roughly 20. This means the collector current is roughly 20 times the base current, i.e. the base current needs to be 1/20 of the collector current: 6 A / 20 = 300 mA.
A Raspberry Pi certainly can't supply 300 mA from its IO pins, so you're operating the transistor in the linear region, which causes it to dissipate a lot of heat.
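To put numbers on why a base resistor can't fix this (values assumed for illustration): driving 300 mA from a 3.3 V pin through a ~0.7 V base-emitter drop would need R = (3.3 - 0.7) / 0.3 ≈ 8.7 ohms, while a Pi GPIO pin can only source around 16 mA - nearly twenty times too little.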
Change your transistors to MOSFETs with a low enough threshold voltage (say 2.0 V, so there's enough conduction at the 3.3 V IO level) to keep it simple.
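A rough comparison of the dissipation (component values assumed): a BJT stuck in the linear region dropping, say, 2 V at 6 A dissipates P = 2 * 6 = 12 W, while a fully enhanced logic-level MOSFET with RDS(on) = 20 milliohms dissipates only P = I^2 * RDS(on) = 6^2 * 0.02 ≈ 0.72 W.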
An N-channel MOSFET will run much cooler if you give it enough gate voltage to fully enhance. Since this is not a high-volume item, why not simply use a MOSFET gate-driver chip? Then you can use a device with low RDS(on). Another option is the Siemens BTS660 (S50085B / BTS50085B, TO-220); it is a high-side driver that you drive with an open-collector or open-drain output. It will switch 5 A at room temperature with no heat sink, is rated for much more current, and comes in a TO-220-type package. It is obsolete but still available, as is its replacement. Remember that MOSFETs are voltage controlled while BJTs are current controlled.

PAPI: what does "reference clock cycles" mean?

I am using the PAPI library to tune and profile my application.
I want to know what PAPI_REF_CYC (reference clock cycles) actually means.
Thanks in advance.
Some modern CPUs, including Intel's and AMD's, are throttled.
This means that their clocks are not fixed but vary depending on the active power management - even if the CPU's advertised frequency is X GHz, more often than not it is not running at that frequency.
For a couple of real examples of this technology, see Intel Turbo Boost / AMD Turbo Core and Intel Enhanced SpeedStep / AMD Cool'n'Quiet.
Since the core clock can slow down or speed up, comparing two different measurements makes no sense.
Having snippet A run in 100 core clocks and snippet B in 200 core clocks means that B takes twice as many core cycles (roughly, double the work), but not necessarily that B took more time than A, since the units are different.
That's where the reference clock comes into play - it ticks at a uniform rate.
If snippet A runs in 100 reference clocks and snippet B runs in 200 reference clocks, then B really did take more time than A.
Converting reference clock ticks into time (e.g. seconds) is not that easy; each processor uses a different reference frequency, even among processors with the same brand name.
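A minimal sketch of reading both counters with PAPI (error handling omitted; whether PAPI_TOT_CYC and PAPI_REF_CYC are both available depends on the CPU and the PAPI build):

// Sketch only: compare core cycles and reference cycles around a workload.
#include <stdio.h>
#include <papi.h>

int main(void)
{
    int es = PAPI_NULL;
    long long counts[2];

    PAPI_library_init(PAPI_VER_CURRENT);
    PAPI_create_eventset(&es);
    PAPI_add_event(es, PAPI_TOT_CYC);  // core clock cycles (varies with DVFS)
    PAPI_add_event(es, PAPI_REF_CYC);  // reference cycles (uniform rate)

    PAPI_start(es);
    volatile double x = 0;             // toy workload
    for (long i = 0; i < 100000000L; i++) x += i;
    PAPI_stop(es, counts);

    printf("core cycles: %lld, ref cycles: %lld\n", counts[0], counts[1]);
    return 0;
}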

IR emitter and PWM output

I have been using a FRDM-KL46Z development board to do some IR communication experiments. Right now, I have two PWM outputs with the same settings (50% duty cycle, 38 kHz) that show different voltage levels. When both were idle, one read 1.56 V but the other 3.30 V. When the outputs were used to power the same IR emitter, the voltages changed to 1.13 V and 2.29 V.
And why couldn't I use one PWM output to power two IR emitters at the same time? When I tried, it seemed that the frequency changed, so the two IR receivers could not work.
I am not an expert in Freescale parts, but how are you controlling your PWM? I'm guessing each PWM comes from a separate timer, and maybe they are set up differently - for example, one in 16-bit mode (the 3.30 V output) and the other in 32-bit mode (the 1.56 V one). In that case, even if both are loaded with the same compare value, a value of (2^16 - 1) / 2 gives 50% duty cycle on a 16-bit timer, but the same value on a timer with a longer period gives a proportionally smaller duty cycle, so one output would average a fraction of the other's voltage. So I suggest checking the timer setup.
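For a quick sanity check on the readings (assuming the meter averages the PWM): the average of a square wave is V_avg = D * V_supply, so a true 50% duty cycle at 3.3 V should read about 0.5 * 3.3 = 1.65 V - close to the 1.56 V output - while a steady 3.30 V reading suggests the other output is sitting high nearly 100% of the time.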
The reason the voltage changed is that the IR emitters were loading the circuit. In an ideal situation this wouldn't happen, but if a source is asked to supply too much current, its voltage usually drops a bit.

AXI4 (Lite) Narrow Burst vs. Unaligned Burst Clarification/Compatibility

I'm currently writing an AXI4 master that is supposed to support AXI4 Lite (AXI4L) as well.
My AXI4 master is receiving data from a 16-bit interface. This is on a Xilinx Spartan 6 FPGA and I plan on using the EDK AXI4 Interconnect IP, which has a minimum WDATA width of 32 bits.
At first I wanted to use narrow bursts, i.e. AWSIZE = x"01" (2 bytes in transfer). However, I found that Xilinx's AXI Reference Guide UG761 states that "narrow bursts [are] supported but [...] not recommended." Unaligned transactions are supposed to be supported.
This had me thinking. Say I start an unaligned burst:
AWLEN = x"01" (2 beats)
AWSIZE = x"02" (4 bytes in transfer")
And do the following:
AX (32-bit word #0: send hi16)
XB (32-bit word #1: send lo16)
Where A and B are my 16-bit words, which start at an unaligned (2-byte-aligned) address. X means WSTRB is deasserted for the indicated 16 bits.
Is this supported, or does this fall under the category "narrow burst" even though AWSIZE = x"02" (4 bytes in transfer) as opposed to AWSIZE = x"01" (2 bytes in transfer)?
Now, if this were just for AXI4, I would probably not care as much about this use case, because AXI4 peripherals are required to honor the WSTRB signals. However, the AXI Reference Guide UG761 states: "[AXI4L] Slave interfaces can elect to ignore WSTRB (assume all bytes valid)."
I read here that many (but not all - and there is no list?) Xilinx AXI4L peripherals do elect to ignore WSTRB.
Does this mean that I'm essentially barred from doing narrow bursts ("not recommended") as well as unaligned bursts ("WSTRB can be ignored"), or is there an easy way to offload some of the implementation work from my master onto the interconnect, guaranteeing proper system behavior when accessing AXI4L peripherals?
Your example is not a narrow burst, and should work.
The reason narrow bursts are not recommended is that they give sub-optimal performance. Both narrow bursts and data realignment cost area, and neither is recommended IMHO. However, a DRE (data realignment engine) has minimal bandwidth cost, while narrow bursts have a substantial one: if your AXI port is 100 MHz and 32 bits wide, you have 3.2 Gbit/s maximum throughput; if you use 16-bit narrow bursts 50% of the time, your maximum throughput is reduced to 2.4 Gbit/s (32 bits × 50 MHz + 16 bits × 50 MHz). Also, I'm not sure AXI-Lite supports narrow bursts or data realignment.
Your example has two major flaws. First, it needs two data-beats to transfer only 32 bits of payload, which is no better than a narrow burst (and I don't think the AXI infrastructure is smart enough to cancel beats whose WSTRB is all zero). Second, you can't burst more than two 16-bit words at a time, which will hurt your AXI infrastructure's performance if you have a lot of data to transfer.
The best way to deal with this is to concatenate pairs of 16-bit words into 32-bit words inside your block, as sketched below. Then you buffer those 32-bit words and burst them out once you have enough. This is the high-performance AXI way to do it.
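In C terms (hardware would use a holding register plus a toggle flag; the name and word order here are illustrative assumptions), the packing step is just:

// Sketch only: pack two 16-bit beats into one 32-bit AXI word.
#include <stdint.h>

static uint32_t pack_word(uint16_t first, uint16_t second)
{
    /* which half-word lands in the low bytes is an assumption and
       depends on your endianness convention */
    return ((uint32_t)second << 16) | first;
}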
However, since you receive the data as 16-bit words, it seems you would be better off using AXI-Stream, which supports 16-bit data but doesn't have the notion of addresses. You can map an AXI-Stream onto AXI4 using Xilinx's IP cores: either AXI Datamover or AXI DMA can do that. Both do the same thing (in fact, AXI DMA includes a Datamover), but AXI DMA is controlled through an AXI-Lite interface, while the Datamover is controlled through additional AXI-Streams.
As a final note, the Xilinx cores never require narrow bursts or a DRE. If you need a DRE in AXI DMA, it's implemented by the AXI DMA core, not the AXI Interconnect. Also, these cores are delivered as readable source, so you can easily check how they operate.
