I want to enter sleep mode with WFI on an STM32F030 (Cortex-M0).
However, my code doesn't seem to work on the STM32F030, although it works on an STM32F103.
I think it works on the F103 because when I try to flash it again (with the ST-LINK utility or Keil) it doesn't respond, and I have to connect under reset, which tends to indicate that the CPU is sleeping. The F030, on the other hand, I can connect to without any problem.
Here is my code:
int main(void) {
    SetupSleep();
    __wfi();      // Enter sleep mode; execution stalls here until an interrupt
    while (1) {}
}
Here is the content of my SetupSleep() function:
void SetupSleep(void) {
    SCB->SCR |= (1ul << 1);   // Set SLEEPONEXIT: re-enter sleep on return from an ISR
    SCB->SCR &= ~(1ul << 2);  // Clear SLEEPDEEP: select Sleep mode rather than Deep Sleep
}
Which, according to page 81 of the F030 programming manual (http://www.st.com/web/en/resource/technical/document/programming_manual/DM00051352.pdf), selects Sleep mode and sets SLEEPONEXIT.
Does this mean an interrupt occurs that makes the CPU exit sleep mode?
This is my first time using sleep mode, so maybe my implementation is not correct.
Instead of manipulating the registers directly, take a look at what the standard peripheral library does. In particular, look at PWR_EnterSleepMode() in stm32f0xx_pwr.c.
At the very least I can see that you're not executing either __WFI() or __WFE() to actually enter sleep mode. There are also other low-power modes, Stop and Standby, that may be of interest to you.
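For reference, a minimal sketch of the SPL route, assuming the STM32F0xx Standard Peripheral Library is on your include path; PWR_SLEEPEntry_WFI is the WFI entry constant from the same driver (check the header for the exact name):

#include "stm32f0xx.h"

int main(void)
{
    /* Enable the PWR controller clock before using the PWR driver. */
    RCC_APB1PeriphClockCmd(RCC_APB1Periph_PWR, ENABLE);

    /* Enter Sleep mode on WFI; the SPL function configures SCB->SCR
     * for you instead of poking the bits by hand. */
    PWR_EnterSleepMode(PWR_SLEEPEntry_WFI);

    while (1) {}
}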
I'm in a strange situation with an ATmega2560.
I want to save power by going into Power-down mode. In this mode there are only a few events that can wake it up.
On USART1 I have an external controller which sends messages to the AVR.
But while USART1 is in use I cannot use INT2 and INT3 as external interrupts (so the CPU will not wake up), since they share pins PD2 and PD3 with RXD1 and TXD1.
So my idea was to disable USART1 just before going into Power-down mode and enable INT2 as an external interrupt.
Pseudo code for this:
UCSR1B &= ~(1<<RXEN1);           //Disable the receiver so the AVR releases the RXD1 pin
DDRD &= ~(1<<PD2);               //Make sure PD2 is an input - we need it for waking up
EIMSK &= ~(1<<INT2);             //Disable INT2 - required before changing ISC20 and ISC21
EICRA |= (1<<ISC20)|(1<<ISC21);  //A rising edge on PD2 will wake the AVR from Power-down
EIMSK |= (1<<INT2);              //Now enable INT2
//Sleep routine
set_sleep_mode(SLEEP_MODE_PWR_DOWN);  //Select Power-down before sleeping
cli();
sleep_enable();
sei();
sleep_cpu();
sleep_disable();
In the ISR of INT2, I change everything back to USART1.
Pseudo:
ISR(INT2_vect) {
    EIMSK &= ~(1<<INT2);  //Disable INT2 so the pin can serve as USART1 RX again
    UCSR1B = (1<<RXEN1)|(1<<TXEN1)|(1<<RXCIE1);  //Re-enable receiver, transmitter and RX interrupt
}
However, it seems to take a long time until USART1 works correctly again.
There are too many corrupted bits at the beginning (after waking up from Power-down).
How hackish is this?
Is there any reasonable way to make the switchover faster?
The main idea was to configure the 'RX' pin as an interrupt that can wake the CPU, then immediately change it back to USART and process the data as soon as possible.
PS: I really have to use the same pin for this purpose; there is no other option available, so guiding me toward other pins won't be accepted as an answer.
Power-down mode disables the oscillator, so you have to wait for the oscillator to become stable again after wake-up.
Please take a look at the datasheet on page 51:
When waking up from Power-down mode, there is a delay from the wake-up condition occurs until the wake-up becomes effective. This allows the clock to restart and become stable after having been stopped. The wake-up period is defined by the same CKSEL Fuses that define the Reset Time-out period, as described in "Clock Sources" on page 40.
You have to wait up to 258 clock cycles, assuming you use a high-speed ceramic resonator (see Table 10-4 on page 42).
You can use Standby mode instead: if you use an external oscillator, the CPU can enter Standby mode, which is identical to Power-down mode except that the oscillator isn't stopped. Furthermore, you can set the Power Reduction Register for additional power-saving options.
Another option is Extended Standby mode, which is identical to Power-save mode except that the oscillator keeps running, so the device wakes up in six clock cycles.
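If Standby solves it for you, the change to the sleep routine above is small. A minimal sketch using the avr-libc sleep macros (SLEEP_MODE_STANDBY and SLEEP_MODE_EXT_STANDBY come from <avr/sleep.h>; pick whichever fits your timer needs):

#include <avr/interrupt.h>
#include <avr/sleep.h>

// Standby keeps the external oscillator running, so the CPU wakes in
// about six clock cycles instead of waiting out the oscillator
// start-up delay that Power-down incurs.
static void go_standby(void)
{
    set_sleep_mode(SLEEP_MODE_STANDBY);  // or SLEEP_MODE_EXT_STANDBY
    cli();
    sleep_enable();
    sei();
    sleep_cpu();      // resumes here after the INT2 wake-up
    sleep_disable();
}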
I will use the following code to explain my question:
#include <Windows.h>
#include <iostream>

int main()
{
    bool toggle = false;
    while (true)
    {
        if (GetAsyncKeyState('C') & 0x8000)
        {
            toggle = !toggle;
            if (toggle) std::cout << "Pressed\n";
            else std::cout << "Not pressed\n";
        }
    }
}
Testing, I see that
(GetAsyncKeyState('C') & 0x8000) // 0x8000 to see if the most significant bit is 1
has the same behavior as
(GetAsyncKeyState('C'))
However, to achieve the behavior I want, which is how any text input out there works (it waits about a second and, if you are still holding the key, starts repeating at a certain rate), I need to write
(GetAsyncKeyState('C') & 1)
The documentation says
The behavior of the least significant bit of the return value is retained strictly for compatibility with 16-bit Windows applications (which are non-preemptive) and should not be relied upon.
Can someone clarify this please?
MSDN tells you why on the same page you linked to!
Although the least significant bit of the return value indicates whether the key has been pressed since the last query, due to the pre-emptive multitasking nature of Windows, another application can call GetAsyncKeyState and receive the "recently pressed" bit instead of your application. The behavior of the least significant bit of the return value is retained strictly for compatibility with 16-bit Windows applications (which are non-preemptive) and should not be relied upon.
GetAsyncKeyState gives you "the interrupt-level state associated with the hardware" and is probably shared by all processes in the window station/session.
The low bit might be connected to the keyboard key repeat delay you can set in Control Panel, but it does not really matter because MSDN tells you to not look at that bit.
GetAsyncKeyState is usually not the correct way to process keyboard input. Console applications should read stdin, or use the console API. GUI applications should use the WM_CHAR/WM_KEYDOWN/WM_KEYUP window messages.
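For the console case, a minimal sketch of reading key events through the console API; this is illustrative only and assumes the process has a normal console input handle:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE hIn = GetStdHandle(STD_INPUT_HANDLE);
    INPUT_RECORD rec;
    DWORD count;

    // ReadConsoleInput blocks until an event arrives; holding a key
    // down produces repeated KEY_EVENT records, which is exactly the
    // wait-then-repeat behavior of a text input field.
    for (;;)
    {
        if (!ReadConsoleInput(hIn, &rec, 1, &count))
            break;

        if (rec.EventType == KEY_EVENT &&
            rec.Event.KeyEvent.bKeyDown &&
            rec.Event.KeyEvent.wVirtualKeyCode == 'C')
        {
            printf("Pressed (repeat count %u)\n",
                   (unsigned)rec.Event.KeyEvent.wRepeatCount);
        }
    }
    return 0;
}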
I am working on a Linux driver for a USB device which, fortunately, is identical to the one in the usb_skeleton example driver that is part of the standard kernel source.
With the 4.4 kernel it was a breeze: I simply changed the VID and PID and a few strings, and the driver compiled and worked perfectly on both x64 and ARM kernels.
But it turns out I have to make this work with a 3.2 kernel; I have no choice in this. I made the same modifications to the skeleton driver in the 3.2 source. Again, I did not have to change actual code, just the VID, PID, and some strings. Although it compiles and loads fine (and shows up in /dev), it permanently hangs on the first attempt to read from /dev/myusbdev0.
The following code is from the read function, which is supposed to read from the bulk endpoint. When I attempt to read from the device, I see the first message saying it is going to block due to ongoing IO. Then nothing. The user program trying to read is hung and cannot be killed with kill -9. The Linux machine cannot even reboot; I have to power-cycle it. There are no error messages, exceptions, or anything like that. It seems fairly certain it is hanging in the part commented 'IO may take forever'.
My question is: why would there be ongoing IO when no program has done any IO with the driver yet? Can I fix this in the driver code, or does the user program have to do something before it can start reading from /dev/myusbdev0?
In this case the target machine is an embedded ARM device similar to a BeagleBone Black. Incidentally, the 4.4 kernel version of this driver works perfectly on the BeagleBone with the same user-mode test program.
/* if IO is under way, we must not touch things */
retry:
	spin_lock_irq(&dev->err_lock);
	ongoing_io = dev->ongoing_read;
	spin_unlock_irq(&dev->err_lock);

	if (ongoing_io) {
		dev_info(&interface->dev,
			"USB PureView Pulser Receiver device blocking due to ongoing io -%d",
			interface->minor);
		/* nonblocking IO shall not wait */
		if (file->f_flags & O_NONBLOCK) {
			rv = -EAGAIN;
			goto exit;
		}
		/*
		 * IO may take forever
		 * hence wait in an interruptible state
		 */
		rv = wait_for_completion_interruptible(&dev->bulk_in_completion);
		dev_info(&interface->dev,
			"USB PureView Pulser Receiver device completion wait done io -%d",
			interface->minor);
		if (rv < 0)
			goto exit;
		/*
		 * by waiting we also semiprocessed the urb
		 * we must finish now
		 */
		dev->bulk_in_copied = 0;
		dev->processed_urb = 1;
	}
Writing this up as an answer since there was no response to my comments. Kernel commit c79041a4 [1], which went into 3.10, fixes "blocked forever in skel_read". Looking at the code above, I see that the first message can trigger without the second being shown if the device file has the O_NONBLOCK flag set. As described in the commit message, if the completion occurs between read() calls, the next read() call will end up at the uninterruptible wait, waiting for a completion which has already occurred.
[1] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=c79041a4
Obviously I am not sure that this is what you are seeing, but I think there is a good chance it is. If that is correct, then you can apply the change (manually) to your driver, and that should fix the problem.
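If you want evidence that this is the race before backporting the patch, here is a rough, purely hypothetical instrumentation sketch you could drop into the read path near the code above (completion_done() exists in 3.2); it is a debug check, not the upstream fix:

/* Hypothetical debug check: log the state the commit message
 * describes -- a bulk-in completion that fired between read()
 * calls while no read was in flight. */
spin_lock_irq(&dev->err_lock);
ongoing_io = dev->ongoing_read;
spin_unlock_irq(&dev->err_lock);

if (!ongoing_io && completion_done(&dev->bulk_in_completion))
	dev_info(&interface->dev,
		 "stale completion with no IO in flight -%d",
		 interface->minor);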
Under what circumstances can the uart_ops.start_tx() operation be called twice in rapid succession in a Linux 2.6 serial driver?
There should be no issue calling it in rapid succession any number of times. If the calls come from competing processors, start_tx() is serialized by a spinlock on port->lock. If they come sequentially, the UART-specific driver checks whether the transmitter has already been started (from linux-2.6.27.8/drivers/mmc/card/sdio_uart.c):
if (!(port->ier & UART_IER_THRI)) {
	port->ier |= UART_IER_THRI;
	sdio_out(port, UART_IER, port->ier);
}
From a higher-level perspective, the serial core checks whether the transmitter is already started, as well as whether starting it is appropriate (linux-2.6.27.8/drivers/serial/serial_core.c):
static void __uart_start(struct tty_struct *tty)
{
	struct uart_state *state = tty->driver_data;
	struct uart_port *port = state->port;

	if (!uart_circ_empty(&state->info->xmit) && state->info->xmit.buf &&
	    !tty->stopped && !tty->hw_stopped)
		port->ops->start_tx(port);
}
I am working in this area on an older kernel, 2.6.10. I too have seen two (or more) calls to the driver's start_tx function for a single 'write' from user space. Via stty, I turned off all 'opost' output processing in the tty layer. After that, I saw only a single start_tx for each write. I suspect the line discipline layer is adding the extra calls to start_tx.
Anecdotal, I know, but I thought it might help.
I have a driver and device that seem to misbehave when the user does any number of complex things (opening large Word documents, opening lots of files at once, etc.), but they do not reliably go wrong when any one thing is repeated. I believe this is because the driver does not handle high interrupt latency gracefully.
Is there a reliable way to increase interrupt latency on Windows XP to test this theory?
I'd prefer to write my test program in Python, but C++ and the WinAPI are also fine...
My apologies for not having a concrete answer, but one idea to explore would be to use either C++ or Cython to hook into the timer interrupt (the clock-tick one) and waste time in there. This will effectively increase latency.
I don't know if there's an existing solution, but you can create your own.
On Windows all interrupts are prioritized, so if driver code is running at a high IRQL, your driver won't be able to service its own interrupt if its level is lower. At least it won't be able to run on the same processor.
I'd do the following:
Configure your driver to run on a single processor (I don't remember how to do this, but such an option definitely exists).
Add an I/O control code to your driver.
In your driver's Dispatch routine, do a busy wait at a high IRQL (more about this below).
Call your driver (via DeviceIoControl) to simulate the stress; a user-mode sketch follows the busy-wait code below.
The busy wait may look something like this:
KIRQL oldIrql;
__int64 t1, t2;

KeRaiseIrql(HIGH_LEVEL, &oldIrql);        /* HIGH_LEVEL == 31 on x86 */
KeQuerySystemTime((LARGE_INTEGER*) &t1);
while (1)
{
    KeQuerySystemTime((LARGE_INTEGER*) &t2);
    if (t2 - t1 > /* put the needed time interval here */)
        break;
}
KeLowerIrql(oldIrql);
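For the DeviceIoControl step, the user-mode side is a plain open-and-ioctl sequence. A minimal sketch; the device name \\.\MyTestDevice and the control code are placeholders that must match whatever your driver actually registers:

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

/* Hypothetical control code; must match the one your Dispatch routine handles. */
#define IOCTL_SIMULATE_LATENCY \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

int main(void)
{
    DWORD returned;
    HANDLE hDev = CreateFileA("\\\\.\\MyTestDevice",  /* placeholder name */
                              GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              OPEN_EXISTING, 0, NULL);
    if (hDev == INVALID_HANDLE_VALUE)
    {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* Each call makes the driver busy-wait at high IRQL, stalling
     * interrupt delivery on that processor for the chosen interval. */
    if (!DeviceIoControl(hDev, IOCTL_SIMULATE_LATENCY,
                         NULL, 0, NULL, 0, &returned, NULL))
        printf("DeviceIoControl failed: %lu\n", GetLastError());

    CloseHandle(hDev);
    return 0;
}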