void TimerFunction()
{
    TIMSK = (1 << TOIE0);   // enable Timer0 overflow interrupt
    TCNT0 = 0x00;           // reset the counter
    TCCR0 |= (0 << CS02) | (0 << CS01) | (1 << CS00); // clock select: no prescaling
}
//##############################################################################
volatile uint32_t countClock, count, delay; // shared with the ISR, so declared volatile

ISR(TIMER0_OVF_vect)
{
    // process the Timer0 overflow here
    countClock++;
    count++;
    delay++;
    // some extra code
}
Then in main():
int main(void)
{
    //someCode
    TimerFunction();
}
But it doesn't work for me. Is this the right way to start Timer0 and its interrupt service routine?
At first sight I'd say you're missing
sei(); // set global interrupt flag
if it is not within //someCode ... In any case, I recommend turning on the global interrupt enable flag only after initializing all specific interrupt sources (timers, USART, etc.).
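A minimal sketch of the whole sequence (assuming avr-gcc and a device such as the ATmega8/16/32, where the register names from the question apply):
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint32_t countClock;

ISR(TIMER0_OVF_vect)
{
    countClock++;
}

int main(void)
{
    TIMSK = (1 << TOIE0);   // enable Timer0 overflow interrupt
    TCNT0 = 0x00;           // reset counter
    TCCR0 = (1 << CS00);    // start Timer0, no prescaling
    sei();                  // enable global interrupts last, after all init
    for (;;) {
        // main loop
    }
}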
Yes, in your code the global interrupt flag is not set. If the solutions MikeD proposed are not working, try this:
asm{sei};
or use SREG.SREG_I = 1; to enable global interrupts
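(For reference, asm{sei} and SREG.SREG_I are syntax from other toolchains; under avr-gcc the equivalents would be:)
#include <avr/io.h>
#include <avr/interrupt.h>

/* somewhere in init code: */
sei();                    /* preferred */
SREG |= (1 << SREG_I);    /* equivalent direct write of the I bit */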
I am using a PIC controller (PIC16F15325) in the simulator window, and I am facing an issue with the sleep functionality. I have declared pin RA2 as an external interrupt pin (high-to-low transition), and I forcefully change the value of RA2 from 1 to 0 from the variable window. After doing that, the ISR never gets called.
All the initialization code I have used was generated by the MPLAB Code Configurator. Can anyone tell me why no interrupt is generated after changing the value?
I am putting my sample code here, which I used for testing:
/* code */
SYSTEM_Initialize();                   // MCC-generated init (clock, pins, interrupt sources)
INTERRUPT_GlobalInterruptEnable();
INTERRUPT_PeripheralInterruptEnable();
while (1)
{
    if (PORTAbits.RA2 == 1)
    {
        SLEEP();                       // sleep until an interrupt wakes the device
        __nop();
    }
    else
    {
        PORTCbits.RC3 = 1;
        PORTCbits.RC4 = 1;
    }
}
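(For reference, a minimal hand-written sketch of what the MCC-generated interrupt manager is expected to do for the INT pin — assuming XC8 and that RA2 is routed to the INT external interrupt; on this family the INTE/INTF bits live in PIE0/PIR0:)
void __interrupt() isr(void)
{
    if (PIE0bits.INTE && PIR0bits.INTF)
    {
        PIR0bits.INTF = 0;   // clear the external-interrupt flag
        // handle the high-to-low edge / wake from sleep here
    }
}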
I get some errors when I try to initialise the connection from the TWI master to the bus. The start condition is sent, but then the processor waits in an infinite loop before sending the slave address to the bus.
I have also analysed the signals on the bus; the clock is running, but no data is sent on the bus.
The processor waits in the line with the marked arrow.
We use the following code to start and initialise the bus ...
void i2c_master_init() {
    TWBR = (uint8_t)TWBR_val;   // set the bit-rate register
}
void i2c_master_stop() {
    TWCR = (1<<TWINT) | (1<<TWEN) | (1<<TWSTO);   // transmit STOP condition
}
uint8_t i2c_master_start(uint8_t address) {
    TWCR = 0;
    TWCR |= (1<<TWSTA);   // the control bits are set one by one here
    TWCR |= (1<<TWEN);
    TWCR |= (1<<TWINT);
    while( !(TWCR & (1<<TWINT)) ); <--
    [...]
}
Currently I don't know what's going wrong with the code, or whether I am doing something else wrong. Can anyone help me?
Thank you in anticipation.
My best guess, without hardware on my bench, is that you should set all the flags in the TWI control register at once: TWCR = (1<<TWINT)|(1<<TWSTA)|(1<<TWEN). You currently set them one by one in three separate operations (multiple clock cycles), while the datasheet effectively requires the flags to be written together; see also the datasheet examples.
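A sketch of a corrected start routine along the lines of the datasheet example (the status check via TW_START from <util/twi.h> is my addition):
#include <avr/io.h>
#include <util/twi.h>

uint8_t i2c_master_start(uint8_t address)
{
    // request START: write TWINT, TWSTA and TWEN in a single store
    TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN);
    while (!(TWCR & (1 << TWINT)))
        ;                                  // wait until START has been sent
    if ((TWSR & 0xF8) != TW_START)
        return 1;                          // START failed

    TWDR = address;                        // load SLA+R/W
    TWCR = (1 << TWINT) | (1 << TWEN);     // clearing TWINT starts transmission
    while (!(TWCR & (1 << TWINT)))
        ;                                  // wait until the address has gone out
    return 0;
}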
I am writing an I2C slave routine for PIC18F25K80 and I am stuck on a weird problem.
This is my routine:
void interrupt interruption_handler() {
PIE1bits.SSPIE = 0; // Disable Master Synchronous Serial Port Interrupt
if (PIR1bits.SSPIF != 1) {
//This is not I2C interruption;
PIE1bits.SSPIE = 1; // Enable Master Synchronous Serial Port Interrupt
return;
}
//Treat overflow
if ((SSPCON1bits.SSPOV) || (SSPCON1bits.WCOL)) {
dummy = SSPBUF; // Read the previous value to clear the buffer
SSPCON1bits.SSPOV = 0; // Clear the overflow flag
SSPCON1bits.WCOL = 0; // Clear the collision bit
SSPCON1bits.CKP = 1;
board_state = BOARD_STATE_ERROR;
} else {
if (!SSPSTATbits.D_NOT_A) {
//Slave address
debug(0, ON);
//Read address
address = SSPBUF; //Clear BF
while(BF); //Wait until completion
if (SSPSTATbits.R_NOT_W) {
SSPCON1bits.WCOL = 0;
unsigned char a = 0x01;
SSPBUF = a;//0x01 works //Deliver first byte
asm("nop");
}
} else {
if (SSPSTATbits.BF) {
dummy = SSPBUF; // Clear BF (just in case)
while(BF);
}
if (SSPSTATbits.R_NOT_W) {
//Multi-byte read
debug(1, ON);
SSPCON1bits.WCOL = 0;
SSPBUF = 0x02; //Deliver second byte
asm("nop");
} else {
//WRITE
debug(2, ON);
}
}
transmitted = TRUE;
SSPCON1bits.CKP = 1;
PIR1bits.SSPIF = 0;
PIE1bits.SSPIE = 1; // Enable Master Synchronous Serial Port Interrupt
}
}
It works like a charm if I write constant values to SSPBUF. For example, if you do:
SSPBUF = 0x01;
(...)
SSPBUF = 0x02;
I get the two bytes on the master. I can even see the waveforms of the bytes being transmitted on the oscilloscope. Quite fun!
But when I try to set SSPBUF using a variable like:
unsigned char a = 0x01;
SSPBUF = a;
I get zero on the master.
It is driving me crazy.
Some hypotheses I've discarded:
The watchdog timer is messing things up by interrupting in the middle of the protocol: it is not. It is disabled, and the problem happens with both SSPBUF assignments.
I need to wait until BF goes low before continuing: I don't. AFAIK, you set up SSPBUF, clear SSPIF, set CKP, and return from the interrupt to get on with life at 4 MHz while the hardware sends the data at a few kHz. It will interrupt you again when it finishes.
It makes no sense to me. What good is the peripheral if you cannot set an arbitrary value from a variable?
Please gurus out there, enlighten this poor programmer.
Thanks in advance.
It has something to do with how the compiler generates the code and some undocumented/unknown PIC restriction around SSPBUF (it is a special register, after all).
I found out that it works when the compiler uses movwf and does not work when the compiler uses movff.
I moved the question to another forum because I realized the audience there is a better fit.
You will find more details here:
https://electronics.stackexchange.com/questions/251763/writing-sspbuf-from-variable-in-i2c-slave-protocol-in-pic18/251771#251771
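For illustration, a workaround sketch along those lines (assuming the MOVFF-to-SSPBUF issue above; the global variable and its name are mine, XC8 prefixes C symbols with an underscore in assembly, and the exact operand syntax may need adjusting for your assembler):
volatile unsigned char i2c_tx_byte;   // file-scope so the asm can reference it

void sspbuf_write_via_w(void)
{
    // force a MOVF/MOVWF sequence instead of a compiler-chosen MOVFF
    asm("MOVF _i2c_tx_byte, W");
    asm("MOVWF _SSPBUF");
}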
Try moving the declaration "unsigned char a = 0x01;" to the beginning of the function, or try defining it as a volatile global variable.
Take into account that SSPBUF is both the read and the write buffer. Check whether there are conditions that may cause the I2C module to reset this buffer.
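(A sketch of that suggestion; the name tx_byte is mine:)
volatile unsigned char tx_byte = 0x01;   // volatile global instead of a local

// ... inside the interrupt handler:
SSPBUF = tx_byte;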
I have bumped into somewhat inconsistent IRQ/ISR performance on Freescale's i.MX233 running Linux kernel 3.8.13 with the CONFIG_PREEMPT_RT patches.
I am a little surprised that this processor (ARM9, 454 MHz) is unable to keep up even with 74 kHz IRQ requests?
In my kernel config I have set the following flags:
CONFIG_TINY_PREEMPT_RCU=y
CONFIG_PREEMPT_RCU=y
CONFIG_PREEMPT=y
CONFIG_PREEMPT_RT_BASE=y
CONFIG_HAVE_PREEMPT_LAZY=y
CONFIG_PREEMPT_LAZY=y
CONFIG_PREEMPT_RT_FULL=y
CONFIG_PREEMPT_COUNT=y
CONFIG_DEBUG_PREEMPT=y
On the system there is basically nothing else running (a minimal buildroot image), and I set a PWM to generate a 74 kHz pulse that serves as the interrupt source.
Then, in the ISR, I just toggle another GPIO output pin and check the output.
What I find is that sometimes I miss an interrupt (shown in the scope capture, omitted here).
Also, the toggling of the output pin seems a bit inconsistent: the output pin is usually toggled within a "5% window", which might still be acceptable. But I worry that when I start implementing the data-transfer logic, instead of just toggling the pin, I might run into further problems...
My simple driver code looks like this:
// needed includes omitted
uint16_t INPUT_IRQ = 39;
uint16_t OUTPUT_GPIO = 38;
struct test_device *device;
//Prototypes
void irqtest_exit(void);
int irqtest_init(void);
void free_device(void);
//Default functions
module_init(irqtest_init);
module_exit(irqtest_exit);
//triggering flag
uint16_t pulse = 0x1;
irqreturn_t irq_handle_function(int irq, void *device_id)
{
pulse = !pulse;
gpio_set_value(OUTPUT_GPIO, pulse);
return IRQ_HANDLED;
}
struct test_device {
int huuhaa;
};
void free_device() {
if (device)
kfree(device);
}
int irqtest_init(void) {
int result = 0;
device = kmalloc(sizeof *device, GFP_KERNEL);
device->huuhaa = 10;
printk("IRB/irqtest_init: Inserting IRQ module\n");
printk("IRB/irqtest_init: Requesting GPIO (%d)\n", INPUT_IRQ);
result = gpio_request_one(INPUT_IRQ, GPIOF_IN, "PWM input");
if (result != 0) {
free_device();
printk("IRB/irqtest_init: Failed to set GPIO (%d) as input.. exiting\n", INPUT_IRQ);
return -EINVAL;
}
result = gpio_request_one(OUTPUT_GPIO, GPIOF_OUT_INIT_LOW , "IR OUTPUT");
if (result != 0) {
free_device();
printk("IRB/irqtest_init: Failed to set GPIO (%d) as output.. exiting\n", OUTPUT_GPIO);
return -EINVAL;
}
//Set our desired interrupt line as input
result = gpio_direction_input(INPUT_IRQ);
if (result != 0) {
printk("IRB/irqtest_init: Failed to set IRQ as input.. exiting\n");
free_device();
return -EINVAL;
}
//Set flags for our interrupt, guessing here..
irq_flags |= IRQF_NO_THREAD;
irq_flags |= IRQF_NOBALANCING;
irq_flags |= IRQF_TRIGGER_RISING;
irq_flags |= IRQF_NO_SOFTIRQ_CALL;
//register interrupt
result = request_irq(gpio_to_irq(INPUT_IRQ), irq_handle_function, irq_flags, "irq testing", device);
if (result != 0) {
printk("IRB/irqtest_init: Failed to reserve GPIO 38\n");
return -EINVAL;
}
printk("IRB/irqtest_init: insert success\n");
return 0;
}
void irqtest_exit(void) {
free_irq(gpio_to_irq(INPUT_IRQ), device); // release the interrupt line before freeing device
gpio_free(INPUT_IRQ);
gpio_free(OUTPUT_GPIO);
if (device)
kfree(device);
printk("IRB/irqtest_exit: Removing irqtest module\n");
}
int irqtest_open(struct inode *inode, struct file *filp) {return 0;}
int irqtest_release(struct inode *inode, struct file *filp) {return 0;}
In the system I have the following interrupts registered after the driver is loaded:
# cat /proc/interrupts
CPU0
16: 36379 - MXS Timer Tick
17: 0 - mxs-spi
18: 2103 - mxs-dma
60: 0 gpio-mxs irq testing
118: 0 - mxs-spi
119: 0 - mxs-dma
120: 0 - RTC alarm
124: 0 - 8006c000.serial
127: 68050 - uart-pl011
128: 151 - ci13xxx_imx
Err: 0
I wonder if the flags I pass for my IRQ are good? I noticed that with this configuration I can no longer reach the console, so the kernel seems totally consumed with servicing this 74 kHz trigger... this can't be right?
I suppose it's not a big deal for me, since this only happens during data transfer, but I still feel I'm doing something wrong...
Also, I wonder if it would be more efficient to map the registers with ioremap and trigger the output with direct memory writes?
Is there some way I could raise the priority of the interrupt even higher? Or could I somehow lock the kernel for the duration of the data transfer (~400 ms) and generate my output timing some other way?
Edit: Forgot to add /proc/interrupts output to the question...
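(Regarding the ioremap idea above, a sketch of mapping the register once and writing it directly from the ISR; the physical address below is a placeholder, not the real i.MX233 PINCTRL address:)
#include <linux/io.h>

#define DOUT_REG_PHYS 0x0        /* placeholder: physical address of the GPIO data register */
static void __iomem *dout_reg;

static int map_fast_gpio(void)
{
    dout_reg = ioremap(DOUT_REG_PHYS, 4);
    return dout_reg ? 0 : -ENOMEM;
}

static inline void fast_gpio_write(u32 val)
{
    writel(val, dout_reg);       /* raw register write, bypassing gpiolib */
}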
What you experience here is interrupt jitter. This is to be expected on Linux, because the kernel regularly disables interrupts for various tasks (entering a spinlock, handling an interrupt, etc.).
This will happen regardless of whether you have PREEMPT_RT or not, so expecting to generate a 74 kHz signal with regular interrupts is pretty much unrealistic.
Now, ARM has a higher-priority interrupt called the FIQ, which is never masked or disabled.
Linux doesn't use the FIQ and is not built to deal with the fact that an FIQ could be used, so you won't be able to use the generic kernel framework.
From the Linux driver-development point of view, however, it's not really different as long as you keep this in mind: you have to write a handler and associate it with an IRQ. You'll also have to poke the interrupt controller to make it generate an FIQ for the interrupt you want to use (the details of how to change this are platform-dependent; some platforms have functions for it, like the i.MX25 and mxc_set_irq_fiq, some don't. The i.MX23/28 don't, so you'll have to do it by hand).
The only catch is that the functions for setting up an FIQ handler only work with an assembly-written handler, so you'll have to rewrite your handler in assembly (with your current code, that should be trivial though).
You can find additional details in the blog post Alexandre posted (http://free-electrons.com/blog/fiq-handlers-in-the-arm-linux-kernel/), where you'll find working code, samples, and explanations of how it all works together.
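For orientation, a sketch of the generic ARM FIQ plumbing (claim_fiq(), set_fiq_handler() and enable_fiq() are the stock kernel helpers; the assembly handler itself and the i.MX23 interrupt-controller poking described above are omitted):
#include <asm/fiq.h>

/* the handler must be position-independent assembly, copied into the FIQ vector */
extern unsigned char my_fiq_start, my_fiq_end;

static struct fiq_handler fh = { .name = "irqtest-fiq" };

static int irqtest_setup_fiq(int irq)
{
    int ret = claim_fiq(&fh);
    if (ret)
        return ret;
    set_fiq_handler(&my_fiq_start, &my_fiq_end - &my_fiq_start);
    enable_fiq(irq);   /* on i.MX23 the routing to FIQ must also be set by hand */
    return 0;
}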
You can have a look at what my colleague Maxime Ripard did using an FIQ on a similar SoC (i.MX28):
http://free-electrons.com/blog/fiq-handlers-in-the-arm-linux-kernel/
Try these flags:
int irq_flags;
...
irq_flags = IRQF_TRIGGER_RISING | IRQF_EARLY_RESUME;
I have kernel 3.8.11 and can't find the IRQF_NO_SOFTIRQ_CALL define. Is it only in 3.8.13?
Also, I don't see irq_flags defined anywhere. Where is it?
I am doing a performance evaluation between Windows CE and Linux on an ARM i.MX27 board. The code was already written for CE and measures the time it takes to perform different kernel calls, such as using OS primitives like mutexes and semaphores, opening and closing files, and networking.
During my porting of this application to Linux (pthreads) I stumbled upon a problem which I cannot explain. Almost all tests showed a performance increase of 5 to 10 times, but not my version of Win32 events (SetEvent and WaitForSingleObject); CE actually "won" this test.
To emulate the behaviour I used pthreads condition variables (I know that my implementation doesn't fully emulate the CE version, but it's enough for the evaluation).
The test code uses two threads that "ping-pong" each other using events.
Windows code:
Thread 1: (the thread I measure)
HANDLE hEvt1, hEvt2;
hEvt1 = CreateEvent(NULL, FALSE, FALSE, TEXT("MyLocEvt1"));
hEvt2 = CreateEvent(NULL, FALSE, FALSE, TEXT("MyLocEvt2"));
ResetEvent(hEvt1);
ResetEvent(hEvt2);
for (i = 0; i < 10000; i++)
{
SetEvent (hEvt1);
WaitForSingleObject(hEvt2, INFINITE);
}
Thread 2: (just "responding")
while (1)
{
WaitForSingleObject(hEvt1, INFINITE);
SetEvent(hEvt2);
}
Linux code:
Thread 1: (the thread I measure)
struct event_flag *event1, *event2;
event1 = eventflag_create();
event2 = eventflag_create();
for (i = 0; i < 10000; i++)
{
eventflag_set(event1);
eventflag_wait(event2);
}
Thread 2: (just "responding")
while (1)
{
eventflag_wait(event1);
eventflag_set(event2);
}
My implementation of eventflag_*:
struct event_flag* eventflag_create()
{
struct event_flag* ev;
ev = (struct event_flag*) malloc(sizeof(struct event_flag));
pthread_mutex_init(&ev->mutex, NULL);
pthread_cond_init(&ev->condition, NULL);
ev->flag = 0;
return ev;
}
void eventflag_wait(struct event_flag* ev)
{
pthread_mutex_lock(&ev->mutex);
while (!ev->flag)
pthread_cond_wait(&ev->condition, &ev->mutex);
ev->flag = 0;
pthread_mutex_unlock(&ev->mutex);
}
void eventflag_set(struct event_flag* ev)
{
pthread_mutex_lock(&ev->mutex);
ev->flag = 1;
pthread_cond_signal(&ev->condition);
pthread_mutex_unlock(&ev->mutex);
}
And the struct:
struct event_flag
{
pthread_mutex_t mutex;
pthread_cond_t condition;
unsigned int flag;
};
Questions:
Why don't I see the performance boost here?
What can be done to improve the performance (e.g. are there faster ways to implement CE's behaviour)?
I'm not used to coding with pthreads; are there bugs in my implementation that perhaps result in a performance loss?
Are there any alternative libraries for this?
Note that you don't need to hold the mutex when calling pthread_cond_signal(), so you might be able to increase the performance of your condition-variable "event" implementation by releasing the mutex before signaling the condition:
void eventflag_set(struct event_flag* ev)
{
pthread_mutex_lock(&ev->mutex);
ev->flag = 1;
pthread_mutex_unlock(&ev->mutex);
pthread_cond_signal(&ev->condition);
}
This might prevent the awakened thread from immediately blocking on the mutex.
This type of implementation only works if you can afford to miss an event. I just tested it and ran into many deadlocks. The main reason for this is that a condition variable only wakes up a thread that is already waiting; signals issued before that are lost.
No counter is associated with the condition that would allow a waiting thread to simply continue if the condition has already been signalled. Windows events support this type of use.
I can think of no better solution than using a semaphore (the POSIX version is very easy to use) initialized to zero, with sem_post() for set() and sem_wait() for wait(). You can surely think of a way to cap the semaphore count at a maximum of 1 using sem_getvalue().
That said, I have no idea whether POSIX semaphores are just a neat interface to the Linux semaphores, or what the performance penalties are.
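A sketch of that semaphore-based event (the cap-at-1 check via sem_getvalue() is racy but follows the suggestion above; the names mirror the question's code):
#include <semaphore.h>
#include <stdlib.h>

struct event_flag { sem_t sem; };

struct event_flag* eventflag_create()
{
    struct event_flag* ev = malloc(sizeof *ev);
    sem_init(&ev->sem, 0, 0);    /* starts unsignalled */
    return ev;
}

void eventflag_set(struct event_flag* ev)
{
    int val;
    sem_getvalue(&ev->sem, &val);
    if (val < 1)                 /* cap the count at 1, like an auto-reset event */
        sem_post(&ev->sem);
}

void eventflag_wait(struct event_flag* ev)
{
    sem_wait(&ev->sem);          /* consumes one event */
}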