While working with Simplicity Studio and the Silicon Labs EFM8BB3 (an 8051-based SoC), I'm observing a very slow transfer rate, with a huge pause (up to 60 ms) between each byte transferred and up to 160 ms between complete messages, on the I2C protocol over the SMBus master interface.
Why is the transfer rate so slow, and is there anything I can do to resolve it?
The problem was resolved by activating another timer responsible only for SCL delay/timeout recognition (beware: this is not clearly explained in the datasheet). On the Silicon Labs EFM8BB3, Timer 2 (T2) provides the clocking for the I2C data stream, while Timer 3 (T3) provides the I2C SCL timeout handling. To decrease the pause between bytes, T3 has to be enabled. T3 must be configured in split mode (two 8-bit timers with auto-reload) with the low-byte interrupt enabled. The T3 low-byte overflow frequency has to be set to 50 kHz (reload value 215) for a 400 kHz I2C transfer rate. A simplified interrupt handler (which just clears the interrupt flag) has to be implemented. The T3 high-byte overflow frequency can be set to the lowest available, 8 kHz (reload value 1). The SMBus interface entity also requires "Enable SMBus SCL Timeout Detection" to be activated.
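For concreteness, here is a rough sketch of that Timer 3 setup, assuming Keil C51 with the Silicon Labs EFM8BB3 device header (SI_EFM8BB3_Register_Enums.h). The hex bit masks follow the C8051/EFM8 timer and SMBus register layouts, and the reload values are the ones given above, so double-check both against the EFM8BB3 reference manual and your actual SYSCLK/CKCON0 prescaler settings:

#include <SI_EFM8BB3_Register_Enums.h>

void timer3_scl_timeout_init(void)
{
    TMR3CN0 = 0x08;              // T3SPLIT = 1: two 8-bit timers with auto-reload
    TMR3RLL = 215;               // low-byte reload -> 50 kHz overflow for 400 kHz I2C
    TMR3RLH = 1;                 // high-byte reload -> lowest overflow frequency (8 kHz)
    TMR3CN0 |= 0x20;             // TF3LEN = 1: enable the low-byte overflow interrupt
    EIE1 |= EIE1_ET3__ENABLED;   // enable the Timer 3 interrupt source
    SMB0CF |= 0x08;              // SMBTOE = 1: Enable SMBus SCL Timeout Detection
    TMR3CN0 |= 0x04;             // TR3 = 1: start Timer 3
}

// Simplified handler: just clear the overflow flags, as described above.
SI_INTERRUPT(TIMER3_ISR, TIMER3_IRQn)
{
    TMR3CN0 &= ~0xC0;            // clear TF3H and TF3L
}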
I need to read temperature data using SPI communication with a MAX31865. First, I tried to read 4 bytes of data:
import machine
import ubinascii
spi = machine.SPI(1, baudrate=5000000, polarity=0, phase=0)
#baudrate controls the speed of the clock line in hertz.
#polarity controls the polarity of the clock line, i.e. if it's idle at a low or high level.
#phase controls the phase of the clock line, i.e. when data is read and written during a clock cycle
cs = machine.Pin(15, machine.Pin.OUT)
cs.off()
cs.on()
data = spi.read(4)
cs.off()
print(ubinascii.hexlify(data))
I have tried many times with different code, but the result is always similar: b'00000000'.
I am using ESP32 WROOM.
I used these pins:
ESP32:    D12 - D14 - 3V3 - GND - D15
MAX31865: SDO - CLK - VIN - GND - CS
I am new to MicroPython and the ESP32.
I don't know what I should do. Are there any suggestions, recommended tutorials, or ideas?
Short answer: see if you can use CircuitPython and its drivers for MAX31865.
Long answer: a bunch of stuff. I suspect you've been following the Adafruit tutorial for the MAX31855, but its SPI interface is very different from the MAX31865's.
Your SPI connection is missing the SDI pin. You have to connect it, as the communication is bidirectional. Also, I suggest using the default SPI pinout on the ESP32 side, as described in the MicroPython documentation for the ESP32.
The SPI startup looks to be missing stuff. Looking at the SPI documentation, a call to machine.SPI() requires that you assign values to the arguments sck, mosi, and miso. Those would be the pins on the ESP32 side where you've connected SCLK, SDI, and SDO on the MAX31865 (note that mosi means "master out, slave in" and miso means "master in, slave out").
The chip select signal on the MAX is inverted (that's what the line above the CS input in the datasheet means). You have to set it low to activate the chip and high to disable it.
You can't just read data out of the chip; it has a protocol you must follow. First you have to request a resistance-to-digital conversion from the chip. The datasheet for your chip explains how to do that; the relevant info starts on page 13 (it's a bit difficult to read for a beginner, but try anyway, as it's the authoritative source of information for this chip). On a high level, it works like this:
Write a value to the Configuration register which initiates the conversion.
Wait for the conversion to complete.
Read from the RTD (Resistance-To-Digital) registers to get the conversion result.
Calculate the temperature value from the conversion result.
The code might be closer to this (not tested, and very likely to not work off the bat - but it should convey the idea):
import time
from machine import Pin, SPI
cs = Pin(15, Pin.OUT)
cs.on()  # Deselect the chip (CS is active low)
# Assuming you've rewired according to the default SPI pinout
spi = SPI(1, baudrate=100000, polarity=0, phase=0, sck=Pin(14), mosi=Pin(13), miso=Pin(12))
# Enable VBIAS and automatic conversion mode by writing 0xC0 to the
# Configuration register (0x00). Note: the MAX31865 write address is the
# register address with the MSB set, so register 0x00 is written via 0x80.
cs.off()
spi.write(b'\x80\xc0')
cs.on()
# Wait for a conversion to complete (up to 66 ms)
time.sleep_ms(100)
# Select the RTD MSBs register (0x01) and read 2 bytes; the register address
# auto-increments, so the second byte comes from the RTD LSBs register (0x02)
cs.off()
spi.write(b'\x01')
data = spi.read(2)
cs.on()
# Join the 2 bytes (read() returns a bytes object, so index into it)
result = data[0] * 256 + data[1]
print(hex(result))
Convert result to temperature according to the section "Converting RTD Data Register Values to Temperature" in the datasheet. For a quick sanity check, that section also gives the linear approximation temperature (°C) ≈ (ADC code / 32) - 256, where the ADC code is the 15-bit RTD value (result >> 1; bit 0 of the LSB is a fault flag).
Side note 1: here, spi = machine.SPI(1, baudrate=5000000, polarity=0, phase=0), you've configured a baud rate of 5 MHz, which is the maximum for this chip. Depending on how you've connected your devices, it may not work. The SPI protocol is synchronous and driven by the master device, so you can set any baud rate you want. Start with a much, much lower value, maybe 100 kHz or so, and increase it after you've figured out how to talk to the chip.
Side note 2: if you want your conversion result faster than the 100 ms sleep in my code, connect the DRDY line from the MAX to the ESP32 and wait for it to go low. That means a conversion has finished and you can read out the result immediately.
I'm in a strange situation with an ATmega2560.
I want to save power by going into Power-down mode. In this mode there are only a few events that can wake the chip up.
On USART1 I have an external controller which sends messages to the AVR.
But while USART1 is in use, I cannot use INT2 and INT3 as external interrupts (= the CPU will not wake up).
So I had the idea to disable USART1 right before going into Power-down mode and to have INT2 enabled as an external interrupt.
Pseudo code for this:
UCSR1B &= ~(1<<RXEN1);          //Disable the receiver: let the AVR release the pin
DDRD &= ~(1<<PD2);              //Make sure PD2 is an input - we need it for waking up
EIMSK &= ~(1<<INT2);            //Disable INT2 - this needs to be done before changing ISC20 and ISC21
EICRA |= (1<<ISC20)|(1<<ISC21); //Rising edge on PD2 will generate an interrupt and wake the AVR from Power-down
EIMSK |= (1<<INT2);             //Now enable INT2
//Sleep routine
set_sleep_mode(SLEEP_MODE_PWR_DOWN);
cli();
sleep_enable();
sei();
sleep_cpu();
sleep_disable();
In the ISR of INT2, I change everything back to USART1.
Pseudo:
ISR(INT2_vect) {
    EIMSK &= ~(1<<INT2); //Disable INT2 to be able to use the pin for USART1 again
    UCSR1B = (1<<RXEN1)|(1<<TXEN1)|(1<<RXCIE1);
}
However, it seems to take a long time until USART1 works correctly again.
There are too many corrupted bits at the beginning (after waking up from Power-down).
How hackish is this?
Is there any reasonable way to make the change faster?
The main idea was to switch the 'RX' pin to an interrupt which can wake the CPU up, then immediately switch it back to USART and process the data as soon as possible.
PS: I really have to use the same pin for this purpose; there is no other option available. So guiding me toward using some other pins won't be accepted as an answer.
Power-down mode disables the oscillator, so you have to wait for a stable oscillator after wake-up.
Please take a look at the datasheet on page 51:
When waking up from Power-down mode, there is a delay from the wake-up condition occurs until the wake-up becomes effective. This allows the clock to restart and become stable after having been stopped. The wake-up period is defined by the same CKSEL Fuses that define the Reset Time-out period, as described in "Clock Sources" on page 40.
You have to wait up to 258 clock cycles, assuming you use a high-speed ceramic oscillator (see Table 10-4 on page 42).
You can use Standby mode instead. If you use an external oscillator, the CPU can enter Standby mode, which is identical to Power-down mode except that the oscillator isn't stopped. Furthermore, you can set the Power Reduction Register for additional power-saving options.
Another option is Extended Standby mode, which is identical to Power-save mode except that the oscillator is kept running; the device wakes up in six clock cycles.
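A minimal sketch of that, assuming avr-libc's <avr/sleep.h> API on the ATmega2560, of entering Extended Standby instead of Power-down so the oscillator keeps running and USART1 is usable again a few cycles after wake-up:

#include <avr/interrupt.h>
#include <avr/sleep.h>

static void sleep_until_int2(void)
{
    set_sleep_mode(SLEEP_MODE_EXT_STANDBY); // oscillator stays running
    cli();
    sleep_enable();
    sei();
    sleep_cpu();       // wakes on the INT2 rising edge configured earlier
    sleep_disable();
}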
I'm trying to disable all interrupts, including NMIs, on a single core of a processor, and put that core into an infinite loop with a JMP instruction targeting itself (byte code 0xEB 0xFE). I tried this with the following machine code:
cli
in al, 0x70
mov bl, 0x80
or al, bl
out 0x70, al
jmp self (0xEBFE)
I assumed that disabling NMIs would also disable the watchdog, since according to this link the watchdog timer is an NMI interrupt. But what happened when I ran this code is that after around 5 seconds my computer bugchecked with code 0x101, CLOCK_WATCHDOG_TIMEOUT. I'm wondering if Windows notices that I've disabled NMIs and re-enables them before initiating the kernel panic. Does anyone know how to disable the watchdog timer in Windows 7?
I don't think the NMIs are at fault here.
External NMIs are obsolete; they are hard to route in an SMP system. That watchdog timer is also obsolete: it was either a secondary PIT or a limited fourth channel of the primary PIT:
----------P00440047--------------------------
PORT 0044-0047 - Microchannel - PROGRAMMABLE INTERVAL TIMER 2
SeeAlso: PORT 0040h,PORT 0048h
0044 RW PIT counter 3 (PS/2)
used as fail-safe timer. generates an NMI on time out.
for user generated NMI see at 0462.
0047 -W PIT control word register counter 3 (PS/2, EISA)
bit 7-6 = 00 counter 3 select
= 01 reserved
= 10 reserved
= 11 reserved
bit 5-4 = 00 counter latch command counter 3
= 01 read/write counter bits 0-7 only
= 1x reserved
bit 3-0 = 00
----------P0048004B--------------------------
PORT 0048-004B - EISA - PROGRAMMABLE INTERVAL TIMER 2
Note: this second timer is also supported by many Intel chipsets
SeeAlso: PORT 0040h,PORT 0044h
0048 RW EISA PIT2 counter 3 (Watchdog Timer)
0049 ?? EISA 8254 timer 2, not used (counter 4)
004A RW EISA PIT2 counter 5 (CPU speed control)
004B -W EISA PIT2 control word
This hardware is gone; it's not present on modern systems. I've tested my machine and I don't have it.
Intel chipsets don't have it: there is only the primary PIT.
Modern timers are the LAPIC timer and the HPET (Linux even resorted to using the PMC registers).
Windows does support a hardware WDT; in fact, Microsoft went as far as defining an ACPI extension for it: the WDAT table.
This WDT, however, can only reboot or shut down the system, in hardware, without the intervention of software.
// Configures the watchdog hardware to perform a reboot
// when it is fired.
//
#define WATCHDOG_ACTION_SET_REBOOT 0x11
//
// Determines if the watchdog hardware is configured to perform
// a system shutdown when fired.
//
#define WATCHDOG_ACTION_QUERY_SHUTDOWN 0x12
//
// Configures the watchdog hardware to perform a system shutdown
// when fired.
//
#define WATCHDOG_ACTION_SET_SHUTDOWN 0x13
Microsoft set quite a few requirements for this WDT, since it must be set up as early as possible in the boot process, before PnP enumeration (i.e., PCI(e) enumeration).
This is not the timer that bugchecked your system.
By the way, I don't have this timer (my system is missing the WDAT table), and I don't expect it to be found on client hardware.
The bugcheck 0x101 is due to a software WDT; it is raised inside a function in ntoskrnl.exe.
This function is called by KeUpdateRunTime and by another chain of calls starting in DriverEntry.
According to Windows Internals, KeUpdateRunTime is used to update the internal tick counting of Windows.
I'd expect only a single logical processor to be put in charge of that, though I'm not sure exactly how Windows housekeeps time.
I'd also expect this software WDT to be implemented in a master-slave fashion: each CPU increments its own counter and a designated CPU checks the counters periodically (or any equivalent implementation).
This seems to be suggested by the wording of the documentation of the 0x101 bugcheck:
The CLOCK_WATCHDOG_TIMEOUT bug check has a value of 0x00000101. This indicates that an expected clock interrupt on a secondary processor, in a multi-processor system, was not received within the allocated interval.
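Purely to illustrate the master-slave scheme I'm speculating about (this is generic C, not Windows code, and every name here is made up), it could look like this:

#include <stdatomic.h>
#include <stdbool.h>

#define MAX_CPUS 64

static atomic_ulong heartbeat[MAX_CPUS];
static unsigned long last_seen[MAX_CPUS];

void clock_interrupt(unsigned cpu)      /* runs in every CPU's clock tick */
{
    atomic_fetch_add(&heartbeat[cpu], 1);
}

bool watchdog_check(unsigned ncpus)     /* runs periodically on the designated CPU */
{
    for (unsigned i = 0; i < ncpus; i++) {
        unsigned long now = atomic_load(&heartbeat[i]);
        if (now == last_seen[i])
            return false;               /* CPU i missed its expected clock interrupt */
        last_seen[i] = now;
    }
    return true;                        /* everyone ticked since the last check */
}

A core spinning at 0xEB 0xFE with interrupts disabled stops bumping its counter, which is exactly the condition the 0x101 text describes.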
Again, I'm not an expert on this part of Windows (the user MdRm probably is) and this may be utterly wrong, but if it isn't, you are probably better off following Alex's advice and booting with one less logical CPU.
You can then execute code on that CPU with an INIT-SIPI-SIPI sequence, as described in Intel's manual, but you must be careful, because the issuing processor is using paging while the sleeping one is not yet (the processor will start up in real mode).
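For reference, a rough, untested sketch of that sequence (assuming ring 0, the xAPIC in its default MMIO mode with the page at physical 0xFEE00000 mapped uncached one-to-one, and a hypothetical delay_us() stall helper; 'vector' selects the 4 KiB page, below 1 MiB, where the AP starts executing in real mode, i.e. physical address vector << 12):

#include <stdint.h>

#define ICR_LOW  ((volatile uint32_t *)0xFEE00300)
#define ICR_HIGH ((volatile uint32_t *)0xFEE00310)

extern void delay_us(unsigned us);          /* hypothetical stall helper */

static void send_ipi(uint8_t apic_id, uint32_t icr)
{
    *ICR_HIGH = (uint32_t)apic_id << 24;    /* destination APIC ID */
    *ICR_LOW  = icr;                        /* writing the low dword sends the IPI */
    while (*ICR_LOW & (1u << 12))           /* spin until delivery status = idle */
        ;
}

void start_ap(uint8_t apic_id, uint8_t vector)
{
    send_ipi(apic_id, 0x00004500u);           /* INIT, edge-triggered, assert */
    delay_us(10000);                          /* 10 ms, per the MP spec */
    send_ipi(apic_id, 0x00004600u | vector);  /* first SIPI */
    delay_us(200);
    send_ipi(apic_id, 0x00004600u | vector);  /* second SIPI */
}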
Initialising a CPU may be a little cumbersome, but not too much, after all.
Stealing it may result in other problems besides the WDT, for example if Windows has routed an interrupt to that processor only.
I don't know if there is a driver API to unregister a logical processor; I found nothing looking at the exports of hal.dll and ntoskrnl.exe.
I am using an Arduino Leonardo to transmit a string to a Wi-Fi module. The format of command that the Wi-Fi module can recognize (to send content to a server) is:
AT60,1,content
I am using a virtual server (TCP/IP Builder) to test the content I receive.
Here is the content I want to send:
smart/device/deviceCmd?userId=1010002003&deviceId=A00019999990002&cmd=ON
Since I need to send it again and again, I use a loop. On the virtual server side, the content I received is:
smart/device/deviceCmd?userId=1010002003&devceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&devceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&eviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&devieId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003deviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&deiceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
This is the QUESTION: there is one persistent mistake in the content I received: the deviceId part is never correct. It's so weird.
Here is part of related code:
//In Uart.cpp
//These three lines send a formatted string such as "AT60,1,content"
Serial1.write("AT60,");
Serial1.write(channelID); //channel ID = 1 here
Serial1.write(reportIsFire, 76);
//In Uart.h
//Definition of the string I need to send, which has 76 characters.
char reportIsFire[76] = ",smart/device/deviceCmd?userId=1010002003&deviceId=A00019999990002&cmd=ON \n";
Here is some background on this application:
I am using the Arduino 1.5.8 IDE with Visual Studio.
Since the serial buffer of the Arduino is only 64 bytes, I have already changed the buffer size to 128 bytes in "HardwareSerial.h" to send out this large string.
The baud rate is 115200 and I am using Serial1. I have used Serial1 to transmit a few other characters and it works fine.
I would appreciate it if you have any idea about this question.
I am betting that the serial baud rate of the Arduino is not 100% correct. Increasing the buffer size will not matter if the data is being lost due to a timing issue on the physical link.
I'd recommend double-checking the code that initializes the serial baud rate generator. It may be possible to get a rate closer to 115,200 by adjusting the available settings, altering the main clock speed (if possible), implementing some form of flow control, or all of the above.
In extreme cases, you may consider using a special-frequency oscillator. Many Microchip PICs use an internal or external 4 MHz or 8 MHz crystal, but this can produce far too much timing error for lengthy serial transmissions at high speed. In that case, something special, like a 7.3728 MHz crystal, can be used, bringing the accuracy to exactly 100% (at least on some PIC devices.)
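To put numbers on that, here is a back-of-the-envelope check (assuming the Leonardo's 16 MHz clock and the UBRR formulas from the AVR datasheet) of how far off 115200 baud actually is:

#include <stdio.h>

int main(void)
{
    const double f_cpu = 16000000.0, target = 115200.0;

    // Normal speed: UBRR = F_CPU/(16*baud) - 1, rounded to the nearest integer
    int ubrr = (int)(f_cpu / (16 * target) - 1 + 0.5);       // -> 8
    double actual = f_cpu / (16.0 * (ubrr + 1));             // -> 111111 baud
    printf("U2X=0: UBRR=%d actual=%.0f error=%+.1f%%\n",
           ubrr, actual, 100 * (actual - target) / target);  // -3.5%

    // Double speed (U2X1=1): UBRR = F_CPU/(8*baud) - 1
    ubrr = (int)(f_cpu / (8 * target) - 1 + 0.5);            // -> 16
    actual = f_cpu / (8.0 * (ubrr + 1));                     // -> 117647 baud
    printf("U2X=1: UBRR=%d actual=%.0f error=%+.1f%%\n",
           ubrr, actual, 100 * (actual - target) / target);  // +2.1%
    return 0;
}

Either way, the error is close to the roughly 2% budget that 8N1 framing tolerates over a whole frame, while 7372800 / (16 * 115200) = 4 exactly, i.e. 0% error, which is why such odd-looking crystal values exist.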
Lastly, another consideration is whether any pre-emptive code is running on the device, such as interrupts or timers, which could inadvertently interfere with the serial output.
I don't have an answer, but I suspect the most likely problem is that the Wi-Fi card can't read characters at a sustained 115200 baud rate. If possible, set the Wi-Fi module's baud rate and the Arduino's Serial1.begin() to a lower rate, such as 57600 or 19200.
If the Arduino baud rate were simply inaccurate, I'd expect the problem to appear at random locations in the string, rather than about 40 characters in.
I can properly read/write to a 2 GB Kingston microSD card using single-pin SPI, but after using the WRITE_MULTIPLE_BLOCK command to write several blocks, the card goes into idle mode. I know this because when I try to send a command to write more data, the card returns an 'in idle state' flag. I created a workaround that pulls the card out of idle after each write, but this severely reduces the bandwidth. Does anyone know why this happens?
Also, the maximum SPI baud rate I have obtained is 1 Mbit/s. When I set the SPI clock to more than 1 MHz, the commands do not work properly. If I send the commands at less than 1 Mbit/s and then send the data at more than 1 Mbit/s, the data is corrupted. Is there a reason I have not been able to reach the 25 MHz speed listed in the SDCARD.org spec on page 2?
https://www.sdcard.org/developers/tech/sdio/sdio_spec/Simplified_SDIO_Card_Spec.pdf
I got SPI speeds of less than 1 Mbit/s when I tried to use the wrong SPI clock polarity once. Double-check this (SD cards in SPI mode expect mode 0, i.e. CPOL = 0 and CPHA = 0); it is also a possible candidate as a source of your 'idle' error.