When sending an ESP32 device to sleep for approximately a day, it wakes up about 3.34% earlier than expected, which amounts to roughly 48 minutes.
Is this the expected accuracy of this device, or can it be tuned to be more accurate?
The concrete device is an ESP32-CAM running at 80MHz at approximately 25°C room temperature.
Code to send the device to sleep:
unsigned int time_to_sleep_sec = 86400;  // 24 hours
// Cast to 64 bits before multiplying so the microsecond value does not overflow a 32-bit int.
esp_sleep_enable_timer_wakeup((uint64_t)time_to_sleep_sec * 1000ULL * 1000ULL);
esp_deep_sleep_start();
Instead of 86400 seconds, the device woke up after approx 83465 seconds.
The default RTC clock source is only about 5% accurate, so this is within spec. Check the info from the documentation:
The RTC timer has the following clock sources:

- Internal 150kHz RC oscillator (default): Features lowest deep sleep current consumption and no dependence on any external components. However, as frequency stability is affected by temperature fluctuations, time may drift in both Deep and Light sleep modes.

- External 32kHz crystal: Requires a 32kHz crystal to be connected to the 32K_XP and 32K_XN pins. Provides better frequency stability at the expense of slightly higher (by 1 uA) Deep sleep current consumption.

- External 32kHz oscillator at 32K_XN pin: Allows using 32kHz clock generated by an external circuit. The external clock signal must be connected to the 32K_XN pin. The amplitude should be less than 1.2 V for sine wave signal and less than 1 V for square wave signal. Common mode voltage should be in the range of 0.1 < Vcm < 0.5xVamp, where Vamp is signal amplitude. Additionally, a 1 nF capacitor must be placed between the 32K_XP pin and ground. In this case, the 32K_XP pin cannot be used as a GPIO pin.

- Internal 8.5MHz oscillator, divided by 256 (~33kHz): Provides better frequency stability than the internal 150kHz RC oscillator at the expense of higher (by 5 uA) deep sleep current consumption. It also does not require external components.
So if you aren't able to add components, the 5 uA 'expense' of the internal 8.5MHz/256 source appears to be reasonable. Otherwise the best solution is to add an external 32kHz crystal.
Or you can wake the device up during the sleep period and correct the time with the help of the internet, e.g. via SNTP.
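If neither of those is possible, a crude software workaround is to scale the requested sleep time by a correction factor measured once for the individual board. This is only a sketch of the idea; DRIFT_FACTOR is an assumed value derived from the single measurement above (86400/83465 ≈ 1.035), it will vary with board and temperature, and the function name is just illustrative:

#include <stdint.h>
#include "esp_sleep.h"

#define DRIFT_FACTOR 1.035   // assumed: requested time / actually slept time, measured once

void go_to_sleep_for_a_day(void)
{
    uint64_t time_to_sleep_sec = 86400ULL;
    // Stretch the requested time so the fast-running RTC clock wakes us up roughly on schedule.
    uint64_t corrected_us = (uint64_t)((double)time_to_sleep_sec * 1000000.0 * DRIFT_FACTOR);
    esp_sleep_enable_timer_wakeup(corrected_us);
    esp_deep_sleep_start();
}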
I have a Raspberry Pi and an auxiliary PCB with transistors for driving some LED strips.
The strips' datasheet says 12V, 13.3W/m. I'll use 3 strips in parallel, 1.8m each, so 13.3 * 1.8 * 3 = 71.82W, which at 12V is almost 6A.
I'm using an 8A transistor, the E13007-2.
In the project I have 5 channels of different LEDs: RGB and 2 types of white.
R, G, B, W1 and W2 are driven directly from Pi pins.
The LED strips are connected to 12V and to GND through CN3 and CN4 (via the transistors).
Transistor schematic.
I know that's a lot of current passing through the transistors, but is there a way to reduce the heating? I think they're reaching 70-100°C. I already had a problem with one Raspberry Pi, and I think it's getting dangerous for the application. I have some large traces on the PCB, so that's not the problem.
Some thoughts:
1 - A resistor driving the base of the transistor. Maybe it won't reduce heating, but I think it's advisable for short-circuit protection; how can I calculate it?
2 - The PWM has a frequency of 100Hz; is there any difference if I reduce this frequency?
The BJT you're using has a current gain (hFE) of roughly 20. This means the collector current is roughly 20 times the base current, or put the other way, the base current needs to be about 1/20 of the collector current, i.e. 6A/20 = 300mA.
A Raspberry Pi certainly can't supply 300mA from its IO pins, so you're operating the transistor in the linear region, which causes it to dissipate a lot of heat.
Change your transistors to MOSFETs with a low enough threshold voltage (around 2.0V, so they conduct fully at the 3.3V IO voltage) to keep it simple.
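To put rough numbers on that, here is a quick back-of-the-envelope check. The hFE of 20 and the 6A load come from above; the two Vce values are illustrative guesses, not datasheet figures:

#include <stdio.h>

int main(void)
{
    double ic      = 6.0;        /* total collector current, A (from the strips)   */
    double hfe     = 20.0;       /* assumed current gain of the E13007-2           */
    double ib      = ic / hfe;   /* base current needed for saturation             */
    double vce_sat = 1.0;        /* illustrative Vce when fully saturated, V       */
    double vce_lin = 6.0;        /* illustrative Vce when starved of base drive, V */

    printf("Base current needed: %.0f mA\n", ib * 1000.0);           /* ~300 mA */
    printf("Dissipation if saturated:     ~%.0f W\n", ic * vce_sat); /* ~6 W    */
    printf("Dissipation in linear region: ~%.0f W\n", ic * vce_lin); /* ~36 W   */
    return 0;
}

A Pi GPIO pin can only source on the order of 16 mA, nowhere near 300 mA, which is why the transistor ends up somewhere in the linear region and burns tens of watts instead of a few.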
An N-channel MOSFET will run much cooler if you give it enough gate voltage to enhance it completely. Since this is not a high-volume item, why not simply use a MOSFET gate driver chip? Then you can use a device with low RDS(on). Another option is the Siemens BTS660 (S50085B BTS50085B TO-220); it is a high-side driver that you will need to drive with an open-collector or open-drain device. It will switch 5A at room temperature with no heat sink, it is rated for much more current, and it is available in a TO-220 type package. It is obsolete but still available, as is its replacement. Remember that MOSFETs are voltage controlled while BJTs are current controlled.
Is the internal clock on the ATtiny85 sufficiently accurate for one-wire timing?
Per https://learn.sparkfun.com/tutorials/ws2812-breakout-hookup-guide, one-wire timing seems to need accuracy around the 0.05us range, so a 10% clock error on the AVR at 8MHz would cause timing differences of about 0.0125us per cycle (assuming the 10% error figure is accurate, and that it's a 10% error on frequency, not +/-10% variance on each pulse).
Not a ton of margin - but is it good enough?
First of all, WS2812 LEDs are not 1-Wire devices.
The control protocol of the WS2812 is described in its datasheet.
The short answer is yes: the ATtiny85, and the whole AVR family, have enough clock accuracy to control a WS2812 chain. But the routine should be written in assembler, and no interrupts should be allowed, to guarantee that the timing requirements are met. When the programming is done well, the 8MHz internal oscillator may even be enough to output different data to two WS2812 chains simultaneously.
So, when running at 8MHz ±10%, one clock cycle is 112...138 ns.
The datasheet requires (with 150ns tolerance):
When transmitting a "one": high level 550...850ns; 6 clock cycles (672...828ns) match this range (5 clock cycles (560...690ns) also match)
followed by a low level of 450...750ns; 5 cycles (560...690ns)
When transmitting a "zero": high level 200...500ns; 3 cycles (336...414ns)
followed by a low level of 650...950ns; 6 cycles (672...828ns).
So, as you can see, even considering the ±10% tolerance of the clock source, you can find an integer number of cycles that is guaranteed to fall within the required intervals.
Speaking from experience, it will still work if the low level that follows the pulse is extended by a couple of hundred nanoseconds.
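If you want to redo this arithmetic for a different clock speed or tolerance, a small C program like the following reproduces the windows (it is the same calculation as above, just without the rounding):

#include <stdio.h>

int main(void)
{
    double f_nom = 8.0e6;   /* nominal internal RC frequency, Hz */
    double tol   = 0.10;    /* +/-10% frequency tolerance        */
    int cycles[] = {3, 5, 6};

    for (int i = 0; i < 3; i++) {
        double t_min_ns = cycles[i] / (f_nom * (1.0 + tol)) * 1e9;  /* clock running fast */
        double t_max_ns = cycles[i] / (f_nom * (1.0 - tol)) * 1e9;  /* clock running slow */
        printf("%d cycles: %.0f ... %.0f ns\n", cycles[i], t_min_ns, t_max_ns);
    }
    return 0;
}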
There are known issues using the internal oscillator with a UART: a UART needs to be timed to about 2% accuracy, while the internal oscillator can be up to 10% off with the factory setting. While it can be calibrated (the AVR has the OSCCAL register for that purpose), its frequency is also influenced by temperature.
It is worth a try, but it might not be reliable with temperature changes or a fluctuating operating voltage.
References: ATmega's internal oscillator - how bad is it, Timing accuracy on tiny2313, Tuning the internal oscillator
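For completeness, writing OSCCAL itself is trivial; the hard part is finding the right value, e.g. by measuring a timer output against a known reference. The +2 below is purely illustrative, not a measured calibration:

#include <avr/io.h>

void nudge_oscillator(void)
{
    /* Small increase of the internal RC frequency; the correct offset is
       chip-specific and has to be determined experimentally. */
    OSCCAL = OSCCAL + 2;
}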
The timing requirements of NeoPixels (WS2812B) are wide enough that the only really critical part is the minimum width of a 1 bit. The ATtiny85 at 16MHz is plenty fast to drive a string of them from a GPIO pin. At 8MHz it may not work (I haven't tried yet). I just released a small Arduino sketch which allows you to control NeoPixel strings of any length on an ATtiny85 without using any RAM.
https://github.com/bitbank2/NeoPixel
For devices with hardware SPI (e.g. the ATmega328p), it's better to use SPI to shift out the bits (also included in my code).
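As a rough sketch of that SPI trick (not taken from the library above): one common encoding uses three SPI bits per WS2812 bit, 0b110 for a one and 0b100 for a zero, with the SPI clock chosen around 2.4 MHz so each group of three spans roughly 1.25 us. Whether the resulting pulse widths land inside the datasheet windows depends on the exact SPI clock your part can generate, and any gap between SPI bytes stretches whichever level is on the wire at that moment, so treat this as a starting point to verify:

#include <stdint.h>

/* Expand one WS2812 data byte (MSB first) into three SPI bytes. */
void ws2812_expand_byte(uint8_t value, uint8_t out[3])
{
    uint32_t bits = 0;
    int8_t i;
    for (i = 7; i >= 0; i--) {
        bits <<= 3;
        bits |= (value & (1u << i)) ? 0x6u : 0x4u;  /* 0b110 for 1, 0b100 for 0 */
    }
    out[0] = (uint8_t)(bits >> 16);
    out[1] = (uint8_t)(bits >> 8);
    out[2] = (uint8_t)(bits);
}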
I use the PIC16F88 for my project, with the XC8 compiler.
What I'm trying to achieve is to control 4 LEDs with 4 buttons: when you press a button it increases the duty cycle of the corresponding LED by 10%.
When you press the button on RB0 it increases the duty cycle of the LED on RB4, and so on.
Every LED is independent, so each can have a different duty cycle.
The problem is that the PIC I'm using only has one PWM module, on either RB0 or RB3 (selected with the CCPMX bit).
After some research I decided to implement software PWM to get four different channels, each channel controlling one duty cycle, but most of the sources I found didn't provide any explanation of how to do it.
Or is there a way to mirror the PWM to multiple pins?
Thanks in advance for helping me out.
Mirroring is not an option.
PWM is relatively simple. You have to set the PWM frequency (which you will not change) and the PWM duty cycle (which you need to change in order to cover the 0-100% voltage range). You also have to decide on the resolution of the PWM, i.e. the voltage step that you need (the built-in PWM, for example, is 8-bit and has 0-255 steps).
Finally, you have to set a timer to interrupt at PWM frequency * PWM resolution. In the timer ISR you keep a resolution counter and compare it against the PWM value of each of your channels. The resolution counter is reset when the resolution value is reached (it starts counting from 0 again, and all outputs go HIGH at that point). When the PWM value of a channel is reached, you pull the corresponding pin LOW (and set it back to HIGH at every resolution-counter reset). A sketch of such an ISR is shown below.
This is only one way of doing it; it involves only one timer and should be the simplest approach since your PIC is low on resources.
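A minimal sketch of that ISR in C. The SET_PIN()/CLR_PIN() macros are placeholders for the real RB4...RB7 writes, and the timer setup that makes this routine fire at PWM frequency * resolution is omitted:

#define PWM_CHANNELS 4
#define PWM_STEPS    100                       /* resolution: duty is 0..100 (%) */

/* Placeholder pin macros: replace with the real PORTB bit writes. */
#define SET_PIN(ch)  /* drive LED output (ch) high */
#define CLR_PIN(ch)  /* drive LED output (ch) low  */

volatile unsigned char duty[PWM_CHANNELS];     /* updated by the button-handling code */
static unsigned char tick;                     /* counts 0..PWM_STEPS-1               */

void pwm_timer_isr(void)                       /* called at PWM frequency * PWM_STEPS */
{
    unsigned char ch;
    if (++tick >= PWM_STEPS) {                 /* new PWM period                      */
        tick = 0;
        for (ch = 0; ch < PWM_CHANNELS; ch++)
            if (duty[ch] > 0) SET_PIN(ch);     /* all active outputs go HIGH          */
    }
    for (ch = 0; ch < PWM_CHANNELS; ch++)
        if (tick >= duty[ch]) CLR_PIN(ch);     /* pull LOW once the duty is reached   */
}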
Hope it helps...
I have to read 5 different frequencies (square waves) of up to 20kHz by polling 5 different pins.
I'm using a single timer interrupt only, firing every 1 millisecond.
Polling of the pins is done in the ISR.
The algorithm I have thought of so far is:
1. Count the number of HIGH samples
2. Count the number of LOW samples
3. Check if HIGH + LOW = time period
This algorithm seems slow and is not practical.
Are there any filter functions I could use to check the frequency at a pin, so that all I have to do is call that function?
Any other algorithms for frequency detection would be good.
I am restricted to only one interrupt in my code (the timer interrupt).
You need to keep in mind what the input signal properties are. As you are limited to just a single interrupt (which is pretty odd) and the timer runs at only 1 kHz, the input signal must not be above 0.5 kHz; and if the signal is noisy, the apparent frequency can easily exceed that limit many times over.

Speed: you wrote that the simple period-counting approach is slow. What CPU and IO power do you have? I am used to Atmel AVR32 AT32UC3 chips, which are two generations older than the ARM Cortex chips, and there I get around 96 MIPS and 2-5 MHz pin read/write rates without DMA or interrupts. So what exactly is slow about that approach?
I would code it within your constraints like this (it is just C++-style pseudo code, not using your platform):
const int n=5;
volatile int T[n],t[n],s[n],r[n]; // T: last measured period, t: tick counter, s: which edge are we waiting for?, r: number of fully measured periods
int T0[n]={?,?,?,...?};           // reference periods to compare with

void main()                       // main process
 {
 int i;
 for (i=0;i<n;i++) { T[i]=0; t[i]=0; s[i]=0; r[i]=0; }
 // config pins
 // config and start timer
 for (;;)                         // infinite loop
  {
  for (i=0;i<n;i++)               // test all pins
   if (r[i]>=2)                   // ignore pins that are not fully measured yet
    {
    r[i]=2;                       // clamp so the counter does not overflow
    if (abs(T[i]-T0[i])>1)        // compare with +/- 1 timer tick tolerance (an even bigger number than 1 can be used)
     {
     // frequency of pin(i) is not as it should be
     }
    }
  }
 }

void ISR_timer()                  // timer interrupt
 {
 // test_out_pin=H
 int i;
 bool p;
 for (i=0;i<n;i++)
  {
  p=get_pin_state(i);             // just read pin as true/false H/L
  t[i]++;                         // increment period counter
  if (s[i]==0){ if ( p) s[i]=1; }                                  // edge L->H
  else        { if (!p) { s[i]=0; T[i]=t[i]; t[i]=0; r[i]++; } }   // edge H->L: store the period, restart the counter
  }
 // test_out_pin=L
 }
You can also scan by comparing the last pin state with the current one; that would eliminate the need for s[]. Something like: p0=p1; p1=get_pin_state(i); if ((p1)&&(p0!=p1)) { T[i]=t[i]; t[i]=0; }
This way you can also more easily implement software glitch filters, but I think the MCU should have hardware filters too (as most MCUs do).
How would I do this without your odd constraints? I would use external interrupts. They can usually be configured to trigger on a specific edge of the signal, often with hardware filtering of the noise included. On each interrupt, take the internal CPU cycle counter value (or, if that is not available, the timer/counter state), subtract the previously captured value, correct for overflow if it occurred, and convert to seconds or Hz if needed. This way I can scan pins on an MCU with a 30MHz clock reliably with frequencies up to 15MHz (I use this for an IRC decoder), and yes, IRC can occasionally give you frequencies above 1 MHz (on the edge between states).
If you want the duty ratio as well, you can either use 2 interrupts, one for the positive and one for the negative edge, or use just one and reconfigure the edge after each hit (the Atmel UC3L chips I used some time ago had a problem with this due to internal bugs).
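A rough sketch of that edge-interrupt measurement; get_cycle_counter() and the ISR hookup are platform-specific placeholders, not a real API:

#include <stdint.h>

extern uint32_t get_cycle_counter(void);  /* placeholder: free-running HW counter      */

volatile uint32_t last_edge;              /* counter value at the previous edge        */
volatile uint32_t period_ticks;           /* last measured period, in counter ticks    */

void pin_edge_isr(void)                   /* fires on the configured edge of the input */
{
    uint32_t now = get_cycle_counter();
    period_ticks = now - last_edge;       /* unsigned wrap-around handles overflow     */
    last_edge = now;
}

/* frequency [Hz] = counter clock [Hz] / period_ticks */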
[notes]
It is essential that the pins you are accessing are on the same IO port, so you can read them all at once and decode the individual pins afterwards. Also, the GPIO module is usually configurable, so check what clock it is powered by; there are usually 2 clocks, one for interfacing the GPIO module with the CPU core and a second for the GPIO itself, so check both.
You can also use DMA instead of an external interrupt: if you can configure DMA to read the IO port ... to memory somewhere, then you can inspect it in a background process, independently of the IO.
I have been using the FRDM-KL46Z development board to do some IR communication experiments. Right now, I have two PWM outputs with the same settings (50% duty cycle, 38 kHz) that show different voltage levels. When both were idle, one was 1.56V but the other was 3.30V. When the outputs were used to power the same IR emitter, the voltages changed to 1.13V and 2.29V.
Also, why couldn't I use one PWM output to power two IR emitters at the same time? When I tried to do this, it seemed that the frequency changed, so the two IR receivers could not work.
I am not an expert in Freescale parts, but how are you controlling your PWM? I'm guessing each PWM comes from a separate timer, and maybe they are set up differently, for example one in 16-bit mode (the 3.30V output) and the other in 32-bit mode (the 1.56V output). In that case, even if they are loaded with the same compare value, a value that gives 50% duty cycle on the shorter timer gives a smaller duty cycle on the longer one, so one output would average a lower voltage than the other (here roughly half). So I suggest checking the timer setup.
The reason the voltage changed is that the IR emitters were loading the circuit. In an ideal situation this wouldn't happen, but when a source is asked for too much current its voltage usually drops a bit.