Need example code for registering to receive a watchdog timer pretimeout event from my Linux application - embedded-linux

I have enabled the watchdog timer in Linux from my application. I have set the timeout to 15 seconds and the pretimeout to 2 seconds before that. I am able to control the watchdog properly and kick it just fine. When the pretimeout goes off, I want to receive a notification so that I can catch it in my application and log some info to a file before the final watchdog timeout expires and reboots the system. Unfortunately, I have not been able to find any example code showing how to register to receive the pretimeout event. I am using open and ioctl to configure the watchdog timer. See the code snippet below:
// Open the connection to the watchdog timer
this->_fdWDT = open("/dev/watchdog", O_WRONLY);
// Ensure we were able to open a connection to the watchdog timer
if (this->_fdWDT != -1)
{
    // Ensure the WDT is disabled before attempting to
    // set the timeout intervals
    flags = WDIOS_DISABLECARD;
    retValue = ioctl(this->_fdWDT, WDIOC_SETOPTIONS, &flags);
    if (retValue == ERROR_NONE)
    {
        flags = TWO_SEC_TIMEOUT;
        retValue = ioctl(this->_fdWDT, WDIOC_SETPRETIMEOUT, &flags);
    }
    if (retValue == ERROR_NONE)
    {
        flags = FIFTEEN_SEC_TIMEOUT;
        retValue = ioctl(this->_fdWDT, WDIOC_SETTIMEOUT, &flags);
    }
    if (retValue != ERROR_NONE)
    {
        // Set our internal error
        retValue = WDT_CONFIG_FAILURE;
    }
}
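For reference, here is the same configuration as a minimal, self-contained sketch that first checks whether the driver actually advertises pretimeout support, which is worth ruling out (WDIOC_GETSUPPORT, WDIOF_PRETIMEOUT and struct watchdog_info come from the standard <linux/watchdog.h> API). Note that the chardev API only defines how the pretimeout is set; how the event is delivered (kernel pretimeout governor, interrupt, NMI) is driver- and platform-specific, so the notification path has to come from your driver's documentation:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/watchdog.h>

int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd == -1) {
        perror("open /dev/watchdog");
        return 1;
    }
    // Ask the driver which features it actually supports
    struct watchdog_info info;
    if (ioctl(fd, WDIOC_GETSUPPORT, &info) == 0 &&
        (info.options & WDIOF_PRETIMEOUT)) {
        int pretimeout = 2; // seconds before the main timeout expires
        if (ioctl(fd, WDIOC_SETPRETIMEOUT, &pretimeout) == 0)
            printf("pretimeout set to %d s\n", pretimeout);
    } else {
        printf("driver does not advertise WDIOF_PRETIMEOUT\n");
    }
    int timeout = 15;
    ioctl(fd, WDIOC_SETTIMEOUT, &timeout);
    // ... kick periodically with ioctl(fd, WDIOC_KEEPALIVE, 0) ...
    write(fd, "V", 1); // magic close, if the driver supports it
    close(fd);
    return 0;
}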
Any help will be greatly appreciated.
Thanks
MMP

Related

SPI implementation stuck on “while(!spi_is_tx_empty(WINC1500_SPI));”

I'm currently implementing a driver for the WINC1500 to be used with an ATMEGA32 MCU, and it gets stuck on the line while(!spi_is_tx_empty(WINC1500_SPI));. The code builds and runs, but the flag this function checks never clears, so execution never proceeds and the Wi-Fi module never boots. I've been stuck on this problem for weeks now with no progress and don't know how to clear it.
static inline bool spi_is_tx_empty(volatile avr32_spi_t *spi)
{
// 1 = All Transmissions complete
// 0 = Transmissions not complete
return (spi->sr & AVR32_SPI_SR_TXEMPTY_MASK) != 0;
}
Here is my implementation of the SPI Tx/Rx function:
void m2mStub_SpiTxRx(uint8_t *p_txBuf,
                     uint16_t txLen,
                     uint8_t *p_rxBuf,
                     uint16_t rxLen)
{
    uint16_t byteCount;
    uint16_t i;
    uint16_t data;
    // Calculate the number of clock cycles necessary; this implies a full-duplex SPI.
    byteCount = (txLen >= rxLen) ? txLen : rxLen;
    // Read / Transmit.
    for (i = 0; i < byteCount; ++i)
    {
        // Wait for transmitter to be ready.
        while (!spi_is_tx_ready(WINC1500_SPI));
        // Transmit.
        if (txLen > 0)
        {
            // Send data from the transmit buffer
            spi_put(WINC1500_SPI, *p_txBuf++);
            --txLen;
        }
        else
        {
            // No more Tx data to send, just send something to keep the clock active.
            // Here we clock out a don't-care byte
            spi_put(WINC1500_SPI, 0x00U);
            // Not reading it back, not being cleared 16/1/2020
        }
        // Reference http://asf.atmel.com/docs/latest/avr32.components.memory.sdmmc.spi.example.evk1101/html/avr32_drivers_spi_quick_start.html
        // Wait for transfer to finish, stuck on here
        // Need to clear the buffer for it to be able to continue
        while (!spi_is_tx_empty(WINC1500_SPI));
        // Wait for transmitter to be ready again
        while (!spi_is_tx_ready(WINC1500_SPI));
        // Send dummy data to slave, so we can read something from it.
        spi_put(WINC1500_SPI, 0x00U); // Change dummy data from 0x00U to 0xFF idea
        // Wait for a complete transmission
        while (!spi_is_tx_empty(WINC1500_SPI));
        // Read or throw away data from the slave as required.
        if (rxLen > 0)
        {
            *p_rxBuf++ = spi_get(WINC1500_SPI);
            --rxLen;
        }
        else
        {
            spi_get(WINC1500_SPI);
        }
    }
}
Debug output log
Disable SPI
Init SPI module as master
Configure SPI and Clock settings
spi_enable(WINC1500_SPI)
InitStateMachine()
INIT_START_STATE
InitStateMachine()
INIT_WAIT_FOR_CHIP_RESET_STATE
m2mStub_PinSet_CE
m2mStub_PinSet_RESET
m2mStub_GetOneMsTimer();
SetChipHardwareResetState (CHIP_HARDWARE_RESET_FIRST_DELAY_1MS)
InitStateMachine()
INIT_WAIT_FOR_CHIP_RESET_STATE
if(m2m_get_elapsed_time(startTime) >= 2)
m2mStub_PinSet_CE(M2M_WIFI_PIN_HIGH)
startTime = m2mStub_GetOneMsTimer();
SetChipHardwareResetState(CHIP_HARDWARE_RESET_SECOND_DELAY_5_MS);
InitStateMachine()
INIT_WAIT_FOR_CHIP_RESET_STATE
m2m_get_elapsed_time(startTime) >= 6
m2mStub_PinSet_RESET(M2M_WIFI_PIN_HIGH)
startTime = m2mStub_GetOneMsTimer();
SetChipHardwareResetState(CHIP_HARDWARE_RESET_FINAL_DELAY);
InitStateMachine()
INIT_WAIT_FOR_CHIP_RESET_STATE
m2m_get_elapsed_time(startTime) >= 10
SetChipHardwareResetState(CHIP_HARDWARE_RESET_COMPLETE)
retVal = true // State machine has completed successfully
g_scanInProgress = false
nm_spi_init();
reg = spi_read_reg(NMI_SPI_PROTOCOL_CONFIG)
Wait for a complete transmission
Wait for transmitter to be ready
SPI_PUT(WINC1500_SPI, *p_txBuf++);
--txLen;
Wait for transfer to finish, stuck on here
Wait for transfer to finish, stuck on here
The ATmega32 is an 8-bit AVR, but you seem to be using code for the AVR32, a family of 32-bit AVRs. You are probably using driver code for the wrong architecture entirely; consult the ATmega32 datasheet and look up the SPI peripheral for the 8-bit AVR ATmega family.
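For comparison, a polled SPI master transfer on the 8-bit ATmega32 uses the SPCR/SPSR/SPDR registers rather than the AVR32 driver API. A minimal sketch, following the SPI example in the ATmega32 datasheet (pins on this part: SS=PB4, MOSI=PB5, MISO=PB6, SCK=PB7):

#include <avr/io.h>

void spi_master_init(void)
{
    // MOSI, SCK and SS as outputs; MISO stays an input
    DDRB |= (1 << PB4) | (1 << PB5) | (1 << PB7);
    // Enable SPI, master mode, clock = F_CPU/16
    SPCR = (1 << SPE) | (1 << MSTR) | (1 << SPR0);
}

uint8_t spi_transfer(uint8_t out)
{
    SPDR = out;                     // start the transfer
    while (!(SPSR & (1 << SPIF))); // wait for completion
    return SPDR;                    // read the byte shifted in from the slave
}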

Sleep functionality is not working in Interrupt service routine in PIC controller

I am using a PIC controller (PIC16F15325) in the simulator window, and I am facing an issue with the sleep functionality. I have configured pin RA2 as an external interrupt pin (high-to-low transition), and I forcefully change the value of RA2 from 1 to 0 from the variable window. After doing that, the ISR never gets called.
All the initialization code I used was generated by the MPLAB Code Configurator. Can anyone tell me why the interrupt is not generated after toggling the value?
I am putting my sample code here, which I used for testing:
/* code */
SYSTEM_Initialize();
INTERRUPT_GlobalInterruptEnable();
INTERRUPT_PeripheralInterruptEnable();
while (1)
{
    if (PORTAbits.RA2 == 1)
    {
        SLEEP();
        __nop();
    }
    else
    {
        PORTCbits.RC3 = 1;
        PORTCbits.RC4 = 1;
    }
}
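For a falling-edge wake-up on RA2, the interrupt-on-change block has to be armed before calling SLEEP(), and the IOC flag must be cleared inside the ISR or it will fire only once. A minimal sketch, assuming the usual PIC16F1-family register names (TRISA/ANSELA/IOCAN/IOCAF/PIE0) as used by MCC-generated code; verify the exact bit names against the PIC16F15325 device header:

#include <xc.h>

void ext_int_init(void)
{
    TRISAbits.TRISA2 = 1;     // RA2 as input
    ANSELAbits.ANSA2 = 0;     // digital input, not analog
    IOCANbits.IOCAN2 = 1;     // interrupt-on-change, negative (falling) edge
    IOCAFbits.IOCAF2 = 0;     // clear any stale flag
    PIE0bits.IOCIE   = 1;     // enable interrupt-on-change
    INTCONbits.PEIE  = 1;     // enable peripheral interrupts
    INTCONbits.GIE   = 1;     // enable global interrupts
}

void __interrupt() isr(void)
{
    if (PIE0bits.IOCIE && IOCAFbits.IOCAF2)
    {
        IOCAFbits.IOCAF2 = 0; // must clear the IOC flag here
        // handle the wake-up event
    }
}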

I²C Master Write with PIC18F45K50 : keeps SCL low

I'm writing my own I²C Master Write function according to Microchip's datasheet. I'm using MPLAB X. I generated the configuration with the Code Configurator; here are the interesting bits:
// R_nW write_noTX; P stopbit_notdetected; S startbit_notdetected; BF RCinprocess_TXcomplete; SMP Standard Speed; UA dontupdate; CKE disabled; D_nA lastbyte_address;
SSP1STAT = 0x80;
// SSPEN enabled; WCOL no_collision; CKP Idle:Low, Active:High; SSPM FOSC/4_SSPxADD_I2C; SSPOV no_overflow;
SSP1CON1 = 0x28;
// SBCDE disabled; BOEN disabled; SCIE disabled; PCIE disabled; DHEN disabled; SDAHT 100ns; AHEN disabled;
SSP1CON3 = 0x00;
// Baud Rate Generator Value: SSP1ADD 80;
SSP1ADD = 0x50;
// clear the master interrupt flag
PIR1bits.SSP1IF = 0;
// enable the master interrupt
PIE1bits.SSP1IE = 1;
So: Standard Speed, 100 ns hold time, Master Mode, clock frequency about 50 kHz.
I tried to follow the procedure described on p. 238 of the datasheet:
http://ww1.microchip.com/downloads/en/DeviceDoc/30000684B.pdf
Here's my code:
#include "mcc_generated_files/mcc.h"
#include <stdio.h>
#define _XTAL_FREQ 16000000
#define RTS_PIN PORTDbits.RD3
#define CTS_PIN PORTDbits.RD2
#define LED_PIN PORTAbits.RA1
#define RX_FLAG PORTAbits.RA2
uint8_t c;
// Define putch() for printf())
void putch(char c)
{
EUSART1_Write(c);
}
void main(void)
{
// Initialize the device
SYSTEM_Initialize();
while (1)
{
// Generate a START condition by setting Start Enable bit
SSP1CON2bits.SEN = 1;
// Wait for START to be completed
while(!PIR1bits.SSPIF);
// Clear flag
PIR1bits.SSPIF = 0;
// Load the address + RW byte in SSP1BUF
// Address = 85 ; request type = WRITE (0)
SSP1BUF = 0b10101010;
// Wait for ack
while (SSP1CON2bits.ACKSTAT);
// Wait for MSSP interrupt
while (!PIR1bits.SSPIF);
// Load data (0x11) in SSP1BUF
SSP1BUF = 0x11;
// Wait for ack
while (SSP1CON2bits.ACKSTAT);
// Generate a STOP condition
SSP1CON2bits.PEN = 1;
// Wait for STOP to be completed
while(!PIR1bits.SSPIF);
// Clear flag
PIR1bits.SSPIF = 0;
// Wait for 1s before sending the next byte
__delay_ms(1000);
}
}
The slave device is an Arduino which I have tested with another Arduino (Master) to make sure it's working correctly.
My problem: analysing the SDA/SCL signals with a logic analyser, when I start the PIC I get two correct messages, with the correct address sent and the byte transmitted, but at the end of the second one SCL is held LOW, which breaks every subsequent write (no proper START condition can be generated while SCL is held LOW). By the way, at the end of the first transmission SCL is held LOW for about 3 ms, but then goes HIGH again for no apparent reason.
Can anyone here point out what I'm doing wrong? Did I forget something?
Thanx in advance.
Best regards.
Eric
PS : when testing the slave with another Arduino as the Master, SCL is set HIGH as soon as the transmission is over.
One thing I'm noticing is that after sending the slave address you wait for the ACK (ACKSTAT) and then for the SSPIF interrupt flag, but after the data byte you only check ACKSTAT, not SSPIF. Maybe try waiting for and clearing SSPIF before setting PEN to assert the STOP condition?
Have you checked the state of the SSPCON and SSPSTAT registers when this behavior occurs? That might help narrow down where the problem lies.
Thanx a lot for your answer!
I cleared SSP1IF after loading the data byte, and now it's working fine!
I think I understand now what was happening: the datasheet indicates that ACKSTAT is the only register bit that reacts on the rising edge of SCL, instead of the falling edge like the other bits. So my code generated the STOP condition too early, which made it inoperative. Thus no STOP condition was generated, SCL stayed stuck LOW, and the next transmission could not be started.
Furthermore, when I wait for the STOP condition to be completed, the SSP1IF flag is still set from before, so the code doesn't actually wait and jumps directly to the delay() function. I don't know if that matters here, since the delay waits anyway, but it could matter if I ever tried to send packets back to back.
So here's the function I wrote, which is working:
(BTW it can take up to 255 data bytes)
void MasterWrite(char _size, char* _data)
{
    // Generate a START condition by setting the Start Enable bit
    SSP1CON2bits.SEN = 1;
    // Wait for START to be completed
    while (!PIR1bits.SSPIF);
    // Clear flag
    PIR1bits.SSPIF = 0;
    // Load the address + RW byte in SSP1BUF
    // Address = 85 ; request type = WRITE (0)
    SSP1BUF = 0b10101010;
    // Wait for ack
    while (SSP1CON2bits.ACKSTAT);
    // Wait for MSSP interrupt
    while (!PIR1bits.SSPIF);
    // Clear flag
    PIR1bits.SSPIF = 0;
    for (int i = 0; i < _size; i++)
    {
        // Load data in SSP1BUF
        SSP1BUF = *(_data + i);
        // Wait for ack
        while (SSP1CON2bits.ACKSTAT);
        // Wait for MSSP interrupt
        while (!PIR1bits.SSPIF);
        // Clear flag
        PIR1bits.SSPIF = 0;
    }
    // Generate a STOP condition
    SSP1CON2bits.PEN = 1;
    // Wait for STOP to be completed
    while (!PIR1bits.SSPIF);
    // Clear flag
    PIR1bits.SSPIF = 0;
}
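Example use, sending two bytes to the slave at address 85:

char payload[2] = {0x11, 0x22};
MasterWrite(2, payload);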
Thanx a lot again for your help!
Best regards.
Eric

How to close a serial communication in Cocoa background thread

I'm trying to run a serial communication example in order to send data from an Arduino to a Cocoa application, following the provided code at http://playground.arduino.cc/Interfacing/Cocoa (IOKit/ioctl method). It works, but I cannot stop the receiver thread once it has started.
I've implemented a toggle button (Start/Stop) which, at start time, opens the serial port and launches the receiver thread:
- (IBAction) startButton: (NSButton *) btn {
    (…)
    error = [self openSerialPort: [SelectPort titleOfSelectedItem] baud:[Baud intValue]];
    (…)
    [self refreshSerialList:[SelectPort titleOfSelectedItem]];
    [self performSelectorInBackground:@selector(incomingTextUpdateThread:) withObject:[NSThread currentThread]];
    (…)
}
The thread code is practically the same as in the example, except that I've added code to rebuild the serial packets from the received buffers and save them to a SQLite database:
- (void)incomingTextUpdateThread: (NSThread *) parentThread {
    // mark that the thread is running
    readThreadRunning = TRUE;
    const int BUFFER_SIZE = 100;
    char byte_buffer[BUFFER_SIZE]; // buffer for holding incoming data
    int numBytes = 0; // number of bytes read during read
    (…)
    // assign a high priority to this thread
    [NSThread setThreadPriority:1.0];
    // this will loop until the serial port closes
    while (TRUE) {
        // read() blocks until some data is available or the port is closed
        numBytes = (int) read(serialFileDescriptor, byte_buffer, BUFFER_SIZE); // read up to the size of the buffer
        if (numBytes > 0) {
            // format serial data into packets, but first prepend the tail end of the last read
            buffer = [[NSMutableString alloc] initWithBytes:byte_buffer length:numBytes encoding:NSASCIIStringEncoding];
            if (status == 1 && [ipacket length] != 0) {
                [buffer insertString:ipacket atIndex:0];
                numBytes = (int) [buffer length];
            }
            ipacket = [self processSerialData:buffer length:numBytes]; // Recompose data and save to database.
        } else {
            break; // Stop the thread if there is an error
        }
    }
    // make sure the serial port is closed
    if (serialFileDescriptor != -1) {
        close(serialFileDescriptor);
        serialFileDescriptor = -1;
    }
    // mark that the thread has quit
    readThreadRunning = FALSE;
}
I try to close the port from the main thread with this code, also part of the startButton: selector, following the provided example:
if (serialFileDescriptor != -1) {
    [self appendToIncomingText:@"Trying to close the serial port...\n"];
    close(serialFileDescriptor);
    serialFileDescriptor = -1;
    // To review... I think the thread doesn't notice that the file descriptor has been closed...
    // wait for the reading thread to die
    while (readThreadRunning);
    // re-opening the same port REALLY fast will fail spectacularly... better to sleep a bit
    usleep(500000); // sleep(0.5) truncates to sleep(0); usleep gives an actual half-second pause
    //[btn setTitle:@"Start"];
    [Start setTitle:@"Start"];
}
But it seems that the receiver thread is not aware of the status change in the global variable serialFileDescriptor.
So, startButton: opens the port, spawns off a thread to start reading from it, and then immediately closes the port? That's not going to turn out well.
startButton: should not close the port. Leave that for the reading thread to do when it's done, and do it on the main thread only when you need to close the port for some other reason (e.g., quitting).
Global variables are, by definition, visible throughout the program, and this includes across thread boundaries. If readThreadRunning is not getting set to FALSE (which assumes that FALSE hasn't been defined to something exotic), then your read thread's loop must still be running. Either it is still reading data, or read is blocked (it is waiting for more data).
Note that read has no way to know whether there will be more data. As your comment in the code says, it will block until either it has some data to return or the port gets closed. You should either work out a way to know ahead of time how much data you'll need to read, and stop when you've read that much, or see if you can close the port at the opposite end when everything has been sent and received.
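If the main thread really does need to request a shutdown, one common alternative (a sketch, not the playground example's code; stopRequested is a hypothetical flag set by the main thread, and byte_buffer/numBytes come from the surrounding method) is to poll the descriptor with select() and a timeout instead of blocking in read(), so the loop gets a regular chance to check the flag:

#include <sys/select.h>

// Inside the reader loop, replacing the blocking read():
fd_set readfds;
struct timeval tv;
while (!stopRequested) {
    FD_ZERO(&readfds);
    FD_SET(serialFileDescriptor, &readfds);
    tv.tv_sec = 0;
    tv.tv_usec = 200000; // wake up every 200 ms to re-check the flag
    int r = select(serialFileDescriptor + 1, &readfds, NULL, NULL, &tv);
    if (r > 0) {
        numBytes = (int) read(serialFileDescriptor, byte_buffer, BUFFER_SIZE);
        if (numBytes <= 0) break; // port closed or read error
        // process bytes ...
    } else if (r < 0) {
        break; // select() failed
    }
    // r == 0 is just the timeout; loop around and check stopRequested again
}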

Monitoring UDP socket in glib(mm) eats up CPU time

I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock, and I use a glibmm IOChannel to connect it to the application's main loop. The socket is read with recvfrom.
My problem is that this setup eats 25% CPU time on a 3 GHz workstation. Can somebody tell me why?
The application is idle in this case, and if I remove the UDP code, CPU usage drops to almost zero. As the application has to perform some CPU-intensive tasks, I can imagine better ways to spend that 25%.
Here are some code excerpts: (sorry for the printf's ;) )
/* bind */
void UDPInterface::bindToPort(unsigned short port)
{
    struct sockaddr_in target;
    WSADATA wsaData;
    target.sin_family = AF_INET;
    target.sin_port = htons(port);
    target.sin_addr.s_addr = 0;
    if (WSAStartup(0x0202, &wsaData))
    {
        printf("WSAStartup failed!\n");
        WSACleanup(); // clean up before exiting, or this line is unreachable
        exit(0); // :)
    }
    sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock == INVALID_SOCKET)
    {
        printf("invalid socket!\n");
        exit(0);
    }
    if (bind(sock, (struct sockaddr*) &target, sizeof(struct sockaddr_in)) == SOCKET_ERROR)
    {
        printf("failed to bind to port!\n");
        exit(0);
    }
    printf("[UDPInterface::bindToPort] listening on port %i\n", port);
}
/* read */
bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
{
    recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
    /* process packet... */
    return true; // keep the handler connected to the main loop
}
/* glibmm connect */
Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
Glib::signal_io().connect(sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN);
I've read here in another question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU usage? It's not clear to me.
Whether I access the socket through glib or call recvfrom() directly doesn't seem to make much difference, since the CPU is used up before any packet arrives and the read handler gets invoked. The glibmm docs also state that it's OK to call recvfrom() even while the socket is being polled (Glib::IOChannel::create_from_win32_socket()).
I've tried compiling the program with -pg and creating a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.
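One thing worth checking, independent of where the idle CPU goes: a Glib::signal_io() handler must return true to stay connected to the main loop, and with a nonblocking socket it is usual to drain everything pending before returning. A sketch of that pattern inside UDPEvent (the error handling is an assumption; on a nonblocking winsock socket, recvfrom fails with WSAEWOULDBLOCK when nothing is left to read):

for (;;) {
    int n = recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
    if (n == SOCKET_ERROR)
        break; // WSAEWOULDBLOCK: socket drained; anything else is a real error
    // process the n-byte packet ...
}
return true; // returning false would detach the handler from the main loop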
