I got an Arduino Uno, which is driven by an ATmega328P, and I wanted to move away from its libraries and do everything at a lower level for learning purposes. However, I cannot get the UART working correctly: it only works when sending to the device. Receiving returns weird garbage which the terminal can't print.
#define BAUDRATE (((F_CPU / (BAUD * 16UL))) - 1)
void init_uart()
{
    UBRR0H = BAUDRATE >> 8;             // set high baud
    UBRR0L = BAUDRATE;                  // set low baud
    UCSR0B = _BV(TXEN0) | _BV(RXEN0);   // enable duplex
    UCSR0C = _BV(UCSZ00) | _BV(UCSZ01) | _BV(USBS0); // 8-N-1
}

void putchar_uart(char c, FILE* stream)
{
    loop_until_bit_is_set(UCSR0A, UDRE0); // wait until the data register is empty
    UDR0 = c;
}

char getchar_uart(FILE* stream)
{
    loop_until_bit_is_set(UCSR0A, RXC0); // wait until data has been received
    return UDR0;
}
//^ actually in a separate file which gets linked
int main()
{
    DDRD |= PIN_LED;
    PORTD |= PIN_LED;
    stdout = &mystdout;
    stdin = &mystdin;
    char buf[0xFF];
    init_uart();
    while (1)
    {
        char c = getchar_uart(NULL);
        if (c == 'a')
        {
            PIND = PIN_LED;
            printf("%s\n", "Hallo");
        }
    }
}
I'm running Ubuntu 14.04 LTS and using minicom for the communication, which is set up as 115200 8N1 (with the correct serial device, of course).
It gets compiled as:
avr-gcc -Wall -Os -mmcu=atmega328p -DF_CPU=16000000UL -DBAUD=115200 -std=c99 -L/home/joel/avr-libs/lib -I/home/joel/avr-libs/inc -o firmware.o main.c -luart
So how do I know that one direction works? Because the LED only toggles when I type an 'a'. But the response consists of invalid characters. In hex:
c8 e1 ec ec ef 8a
By setting the USBS bit you are commanding a second stop bit.
This appears to lead your computer to mistakenly believe that the MSB (which is the last data bit) is set when it isn't, causing your received data to be OR'd with 0x80.
While this will cause a framing error, it is probably not the cause of the wrong MSB. Your own answer about switching to 2x mode (and thus more accurately approximating the baud rate) is more key to the solution, though you should correct this too.
I fixed the problem. When Chris suggested printing out the configuration registers that the Arduino libraries use, I noticed that they enable double-speed mode (U2X0). I couldn't configure that with minicom, or I missed it; maybe that mode is simply the default there. Anyway, it works now.
I also learned that avr-libc provides a header called util/setbaud.h which calculates the correct baud rate divisors automatically and provides them in the UBRRH_VALUE and UBRRL_VALUE macros.
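For reference, here is a minimal sketch of an init_uart() based on that header (assuming F_CPU and BAUD are passed on the command line, as in the build command above). It also shows why double-speed mode matters here: with the macro from the question the divisor truncates to 7 (125000 baud, about 8.5% error), proper rounding only gets you to 8 (111111 baud, about 3.5% error), while double-speed mode gives 16 (about 117647 baud, roughly 2.1% error). BAUD_TOL is raised to 3% so setbaud.h does not warn about that remaining error:

#include <avr/io.h>

#define BAUD_TOL 3                      // accept up to 3% baud rate error
#include <util/setbaud.h>

void init_uart(void)
{
    UBRR0H = UBRRH_VALUE;               // divisors computed by util/setbaud.h
    UBRR0L = UBRRL_VALUE;
#if USE_2X
    UCSR0A |= _BV(U2X0);                // double-speed mode when needed
#else
    UCSR0A &= ~_BV(U2X0);
#endif
    UCSR0B = _BV(TXEN0) | _BV(RXEN0);   // enable transmitter and receiver
    UCSR0C = _BV(UCSZ01) | _BV(UCSZ00); // 8 data bits, no parity, 1 stop bit
}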
I'm running into an initial LIDAR connection issue when simultaneously connecting 4 Slamtec RPLIDAR A3 units using MATLAB
with the provided interface library found here: https://github.com/ENSTABretagneRobotics/Hardware-MATLAB
The issue is that I am having to retry the connection on at least one of the LIDARs before it connects.
Which LIDAR it is can also vary; all but one LIDAR connects the first time.
One time it could be the LIDAR on one COM port, another time a LIDAR on another COM port.
This is the way it is set up right now.
Basically MATLAB loads the provided interface library, hardwarex.dll, which exposes some library methods to be used by MATLAB.
The method to connect the LIDAR does the following:
Opens the RS232 port
Sets port options
Gets some info and health statuses from the lidar
Sets the motor PWM to zero (stop lidar motor)
Uses express scan mode option
Somewhere in here the communication errors out.
Using a serial sniffer I was able to see that the LIDAR errors out after the following message to the LIDAR:
a5 f0 02 ff 03 ab a5 25 a5 82 05 00 00 00 00 00 22
I tracked this to the following library methods, called in that order:
SetMotorPWMRequestRPLIDAR()
CheckMotorControlSupportRequestRPLIDAR()
StopRequestRPLIDAR()
StartExpressScanRequestRPLIDAR() <-- Error here
To which the LIDAR responds with:
a5 5a 54 00 00 40 82
whereas a successful connection response from the LIDAR is much longer.
Things I've tried
Draining (forcing out all written data) the write buffer with the interface library's DrainComputerRS232Port() method before and/or after any write to the lidar.
Setting the TX/write OS FIFO buffer to FILE_FLAG_NO_BUFFERING (i.e. for WriteFile()).
Changing the hardware FIFO buffer from max (16) to min (1).
Using MATLAB's serial() command to flush any input or output buffers prior to loading the library or trying the connections.
This is the system and settings I am working with
Lidar (x4):
Slamtec RPLIDAR A3
Firmware 1.26
Connected via USB (no USB hub used)
No other COM port devices connected
Computer
OS: Windows 10 Pro - Build 1903
CPU: Intel Xeon 3.00Ghz
RAM: 64 GB
HD: SSD - 512GB NVMe
Serial Port Settings
Baud Rate: 256000
Timeout: 1000
Software
MATLAB R2018b (9.5.0)
I've been banging my head on the wall with this. Any help is much much appreciated!
I'm going to answer my own question. Anyone interested in a more detailed discussion, please refer to the issue posted on the MATLAB RPLIDAR repo:
https://github.com/ENSTABretagneRobotics/Hardware-MATLAB/issues/2
As I mentioned, when debugging, the error seemed to happen in ConnectRPLIDAR() --> StartExpressScanRequestRPLIDAR(), specifically here:
// Receive the first data response (2 data responses needed for angles computation...).
memset(pRPLIDAR->esdata_prev, 0, sizeof(pRPLIDAR->esdata_prev));
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, pRPLIDAR->esdata_prev, sizeof(pRPLIDAR->esdata_prev)) != EXIT_SUCCESS)
{
// Failure
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
What seemed to happen before that is that after the command was sent out in WriteAllRS232Port(), sometimes ReadAllRS232Port() would not read a response; esdata_prev would contain nothing.
We tried adding an mSleep(500) delay before that second ReadAllRS232Port(), and it seemed to help (my guess is that the lidar was slow to respond), but the issue was not fully resolved.
The following is what made it work every time with 4 lidars:
inline int StartExpressScanRequestRPLIDAR(RPLIDAR* pRPLIDAR)
{
unsigned char reqbuf[] = { START_FLAG1_RPLIDAR,EXPRESS_SCAN_REQUEST_RPLIDAR,0x05,0,0,0,0,0,0x22 };
unsigned char descbuf[7];
unsigned char sync = 0;
unsigned char ChkSum = 0;
// Send request to output/tx OS FIFO buffer for port
if (WriteAllRS232Port(&pRPLIDAR->RS232Port, reqbuf, sizeof(reqbuf)) != EXIT_SUCCESS)
{
printf("Error writing data to a RPLIDAR. \n");
return EXIT_FAILURE;
}
// Receive the response descriptor.
memset(descbuf, 0, sizeof(descbuf)); // Clear the response descriptor buffer
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, descbuf, sizeof(descbuf)) != EXIT_SUCCESS)
{
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
// Quick check of the response descriptor.
if ((descbuf[2] != 0x54) || (descbuf[5] != 0x40) || (descbuf[6] != MEASUREMENT_CAPSULED_RESPONSE_RPLIDAR))
{
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
// Keep checking the port read buffer for up to 1.5 seconds
int timeout = 1500;
// Check it every 5 ms
// Note on Checking Period Value:
// Waiting on 82 bytes in lidar payload
// 10 bits per byte for the serial communication
// 820 bits / 256000 baud = 0.0032s = 3.2ms
int checkingperiod = 5;
RS232PORT* pRS232Port = &pRPLIDAR->RS232Port;
int i;
int count = 0;
// Wait for something to show up on the input buffer on port
if (!WaitForRS232Port(&pRPLIDAR->RS232Port, timeout, checkingperiod))
{
//Success - Something is there
// If anything is on the input buffer, wait until there is enough
count = 0;
for (i = 0; i < 50; i++)
{
// Check the input FIFO buffer on the port
GetFIFOComputerRS232Port(pRS232Port->hDev, &count);
// Check if there is enough to get a full payload read
if (count >= sizeof(pRPLIDAR->esdata_prev))
{
// There is enough, stop waiting
break;
}
else
{
// Not enough, wait a little
mSleep(checkingperiod);
}
}
}
else
{
//Failure - After waiting for an input buffer, it wasn't there
printf("[StartExpressScanRequestRPLIDAR] : Failed to detect response on the input FIFO buffer. \n");
return EXIT_FAILURE;
}
// Receive the first data response (2 data responses needed for angles computation...).
memset(pRPLIDAR->esdata_prev, 0, sizeof(pRPLIDAR->esdata_prev));
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, pRPLIDAR->esdata_prev, sizeof(pRPLIDAR->esdata_prev)) != EXIT_SUCCESS)
{
// Failure
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
// Analyze the first data response.
sync = (pRPLIDAR->esdata_prev[0] & 0xF0)|(pRPLIDAR->esdata_prev[1]>>4);
if (sync != START_FLAG1_RPLIDAR)
{
printf("A RPLIDAR is not responding correctly : Bad sync1 or sync2. \n");
return EXIT_FAILURE;
}
ChkSum = (pRPLIDAR->esdata_prev[1]<<4)|(pRPLIDAR->esdata_prev[0] & 0x0F);
// Force ComputeChecksumRPLIDAR() to compute until the last byte...
if (ChkSum != ComputeChecksumRPLIDAR(pRPLIDAR->esdata_prev+2, sizeof(pRPLIDAR->esdata_prev)-1))
{
printf("A RPLIDAR is not responding correctly : Bad ChkSum. \n");
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
So in the above code, we wait up to 1.5 s for the OS read FIFO buffer to show something, checking every 5 ms (WaitForRS232Port()). If anything shows up, we then wait until there is enough data for a full payload (GetFIFOComputerRS232Port()).
I'm not sure if it made a difference, but we also disabled the OS write FIFO buffering by changing the flags argument from 0 to FILE_FLAG_NO_BUFFERING:
File: OSComputerRS232Port.h
...
hDev = CreateFile(
    tstr,
    GENERIC_READ|GENERIC_WRITE,
    0,                      // Must be opened with exclusive-access.
    NULL,                   // No security attributes.
    OPEN_EXISTING,          // Must use OPEN_EXISTING.
    FILE_FLAG_NO_BUFFERING, // Not overlapped I/O. Should use FILE_FLAG_WRITE_THROUGH and maybe also FILE_FLAG_NO_BUFFERING?
    NULL                    // hTemplate must be NULL for comm devices.
);
...
I get some stupid errors when I try to initialise the connection from the TWI master to the bus. The start condition is sent, but the processor waits in an infinite loop before starting to send the slave address to the bus.
I have also analysed the signals on the bus, and one result is that the clock is running but no data is sent on the bus.
The processor waits at the line with the marked arrow.
We use the following code to start and initialise the bus:
void i2c_master_init() {
    TWBR = (uint8_t)TWBR_val;
}

void i2c_master_stop() {
    TWCR = (1<<TWINT) | (1<<TWEN) | (1<<TWSTO);
}

uint8_t i2c_master_start(uint8_t address) {
    TWCR = 0;
    TWCR |= (1<<TWSTA);
    TWCR |= (1<<TWEN);
    TWCR |= (1<<TWINT);
    while( !(TWCR & (1<<TWINT)) ); <--
    [...]
}
Currently I don't know what's going wrong with the code, or whether I am doing something else wrong. Can anyone help me?
Thank you in anticipation.
My best guess, without hardware on my bench, is that you should set all flags in the TWI control register at once with TWCR = (1<<TWINT)|(1<<TWSTA)|(1<<TWEN). You are currently setting them one by one in three separate operations (multiple clock cycles), while the datasheet implies the flags must be set together; see also the datasheet examples.
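To illustrate, here is a minimal sketch of a start routine that writes the flags in one operation (modelled on the TWI master example in the ATmega datasheet; TW_START and TW_STATUS come from <util/twi.h>, and the error handling is only indicative):

#include <avr/io.h>
#include <util/twi.h>

uint8_t i2c_master_start(uint8_t address)
{
    // Send START: TWINT, TWSTA and TWEN written in a single operation.
    TWCR = (1<<TWINT) | (1<<TWSTA) | (1<<TWEN);
    while (!(TWCR & (1<<TWINT)))
        ;                               // wait until the START has been transmitted
    if (TW_STATUS != TW_START)
        return 1;                       // START condition was not generated

    // Load the slave address (including the R/W bit) and clear TWINT to send it.
    TWDR = address;
    TWCR = (1<<TWINT) | (1<<TWEN);
    while (!(TWCR & (1<<TWINT)))
        ;
    return 0;
}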
I have a Raspberry Pi 3 connected via SPI to an AVR ATtiny26, which in turn has an LCD connected to it. I am trying to get the SPI working.
Now, the issue is that when I set up the AVR for two-wire mode and don't configure the pull-up on PB1 (the MISO line is commented out):
USICR = (1<<USIOIE)|(1<<USIWM1)|(1<<USICS1); // Enable USI interrupt - USIOIE=1
// Three wire mode USIWM1=0, USIWM0=1
// Two wire mode USIWM1=1, USIWM0=0
// External clock USICS1=1
//PORTB |= (1<<SPI_MISO); // Enable pull-ups on SPI port
DDRB = 0b01001010; /* Set PORTB bits: 7-4 as input
-- PB7 - Pushbutton (KEY1)/RESET
-- PB6 - Pushbutton (KEY2)/INT0
-- PB5 - ADC8 (T2)
-- PB4 - ADC7 (T1)
-- PB3 - PUMP
-- PB2 - SCK - 0 = external clock (input)
-- PB1 - MISO (output)
-- PB0 - MOSI (input) - */
ISR(USI_OVF_vect) {
    disp[counter++] = USIDR;
    if(counter==16)
        counter = 0;
    USISR |= (1<<USIOIF);
}
I get the string transferred and printed on the LCD.
However, when I change the AVR to work in three-wire mode and/or enable the PB1 pull-up, all I get is garbage. Neither do the received characters match the ones sent, nor does their count.
Raspberry is the master here, providing all the clocking, the setup there is always the same (default, three wire mode) and the clock is reasonably slow.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <bcm2835.h>

int main(int argc, char **argv) {
    int res = bcm2835_init();
    printf("BCM2835 Init() = %d\n", res);
    res = bcm2835_spi_begin();
    printf("BCM2835 Begin() = %d\n", res);
    bcm2835_spi_setClockDivider(BCM2835_SPI_CLOCK_DIVIDER_65536);
    char data[16];
    sprintf(data,"%s","<--Some data-->");
    int len = strlen(data);
    printf("Sent: %s\n", data);
    bcm2835_spi_writenb(data, len);
    exit(0);
}
Same results with spidev_test program using ioctl, so does not seem related to the library/Pi's program.
On top of that, what puzzles me is that when I disconnect the wire from PB1 (MISO), I immediately start receiving garbage from the Pi, as if the Pi's SPI immediately starts clocking when PB1/MISO floats.
What am I missing here?
Regretfully, I have to say that this one goes into the RTFM category.
After some research I found that the Pi GPIO works with +3.3 V, while the ATtiny was set to use +5 V. After rewiring the AVR to work with 3.3 V as well, everything seems to be working.
The reason why it worked in two-wire mode is the absence of the AVR's pull-up resistors (external ones are required), which allowed the Pi to use its own and drive the AVR pins in a range acceptable to both the Pi and the AVR. Enabling the pull-ups on the AVR would drive the Pi's GPIO over its limit. Apparently no damage was done, only unpredictable and hard-to-explain behavior.
My project is an audio spectrum analyzer, but I am stuck in displaying the ADC results, either on my LCD or on the Terminal of CodeVisionAVR.
The project uses an ATmega16A with a 7.37 MHz external oscillator. For an IDE I am using CodeVisionAVR.
The audio spectrum analyzer takes its input through a 3.5 mm audio jack cable; this signal is amplified and filtered to select the frequencies between 0 and 4 kHz, and the output of this circuit is connected to PA0, which is channel 0 of the microcontroller's ADC.
For testing, I have set the ADC to work on 8 bits (read the most significant 8 bits), taking the internal 2.56V as voltage reference. I have decoupled AREF pin using a 10nF capacitor (I will change it to 100nF for a better noise reduction). The ADC is also in free running mode.
I am stuck in displaying the ADC results, either on my LCD or on the Terminal of CodeVisionAVR (through the UART, configured using the wizard).
This is the function I used for the ADC:
// Voltage Reference: Int., cap. on AREF
#define ADC_VREF_TYPE ((1<<REFS1) | (1<<REFS0) | (1<<ADLAR))
// Read the 8 most significant bits
// of the AD conversion result
unsigned char read_adc(unsigned char adc_input)
{
    ADMUX = adc_input | ADC_VREF_TYPE;
    // Delay needed for the stabilization of the ADC input voltage
    delay_us(10);
    // Start the AD conversion
    ADCSRA |= (1<<ADSC);
    // Wait for the AD conversion to complete
    while ((ADCSRA & (1<<ADIF))==0);
    ADCSRA |= (1<<ADIF);
    return ADCH;
}
Main function of the code:
void main (void)
{
    Init_Controller(); // this must be the first "init" action/call!
    #asm("sei") // enable interrupts
    lcd_init(16);
    lcd_gotoxy(0,1);
    lcd_putsf("AUDIO SPECTRUM");
    delay_ms(3000);
    lcd_clear();
    while(TRUE)
    {
        wdogtrig();
        TCNT1 = 0; // usage of Timer1 with OCR1A
        TIFR |= 1<<OCF1A;
        for(i=0;i<N;i++) {
            while((TIFR & (1<<OCF1A)) == 0)
                putchar(read_adc());
                //adc_set[i] = adc_read(); //this is a second option
            TIFR |= 1<<OCF1A;
        }
        //for(i=0; i<N; i++)
        //printf("adc values: %d \n",adc_set[i]);
    } //end while loop
}
N is defined as 32, the number of samples taken in one acquisition.
The first error I see is using putchar() to write a number to the LCD.
The result of read_adc() is a number, not a string of ASCII characters. You need to use sprintf to write the ADC result as a string into a buffer, then use lcd_puts() to send the buffer to the LCD (lcd_putsf() is only for constant strings stored in flash).
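For example, a minimal sketch of that approach (assuming the standard CodeVisionAVR stdio and LCD routines):

char lcdbuf[17];                      // one 16-character LCD line plus terminator
unsigned char value;

value = read_adc(0);                  // channel 0 = PA0
sprintf(lcdbuf, "ADC: %3u", value);   // convert the number to ASCII text
lcd_gotoxy(0,0);
lcd_puts(lcdbuf);                     // lcd_puts() prints a string from SRAM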
I have bumped into somewhat inconsistent IRQ/ISR performance on Freescale's i.MX233 running Linux kernel 3.8.13 with the CONFIG_PREEMPT_RT patches.
I am a little bit surprised that this processor (ARM9, 454 MHz) is unable to keep up even with 74 kHz IRQ requests... why?
In my kernel config I have set following flags:
CONFIG_TINY_PREEMPT_RCU=y
CONFIG_PREEMPT_RCU=y
CONFIG_PREEMPT=y
CONFIG_PREEMPT_RT_BASE=y
CONFIG_HAVE_PREEMPT_LAZY=y
CONFIG_PREEMPT_LAZY=y
CONFIG_PREEMPT_RT_FULL=y
CONFIG_PREEMPT_COUNT=y
CONFIG_DEBUG_PREEMPT=y
On the system there is basically nothing running (a minimal image created by Buildroot), and I set a PWM to generate a 74 kHz pulse that serves as the interrupt.
Then in the ISR, I just trigger another GPIO output pin, and check the output.
What I find is that sometimes I miss an interrupt, as can be seen on the scope capture.
Also, the triggering of the output pin seems to be a bit inconsistent; the output pin is usually triggered within a "5% window", which might still be acceptable. But I worry that when I start implementing data transfer logic, instead of just triggering the pin, I might run into further problems...
My simple driver code looks like this:
/* needed includes */
uint16_t INPUT_IRQ = 39;
uint16_t OUTPUT_GPIO = 38;
struct test_device *device;
//Prototypes
void irqtest_exit(void);
int irqtest_init(void);
void free_device(void);
//Default functions
module_init(irqtest_init);
module_exit(irqtest_exit);
//triggering flag
uint16_t pulse = 0x1;
irqreturn_t irq_handle_function(int irq, void *device_id)
{
pulse = !pulse;
gpio_set_value(OUTPUT_GPIO, pulse);
return IRQ_HANDLED;
}
struct test_device {
int huuhaa;
};
void free_device() {
if (device)
kfree(device);
}
int irqtest_init(void) {
int result = 0;
device = kmalloc(sizeof *device, GFP_KERNEL);
device->huuhaa = 10;
printk("IRB/irqtest_init: Inserting IRQ module\n");
printk("IRB/irqtest_init: Requesting GPIO (%d)\n", INPUT_IRQ);
result = gpio_request_one(INPUT_IRQ, GPIOF_IN, "PWM input");
if (result != 0) {
free_device();
printk("IRB/irqtest_init: Failed to set GPIO (%d) as input.. exiting\n", INPUT_IRQ);
return -EINVAL;
}
result = gpio_request_one(OUTPUT_GPIO, GPIOF_OUT_INIT_LOW , "IR OUTPUT");
if (result != 0) {
free_device();
printk("IRB/irqtest_init: Failed to set GPIO (%d) as output.. exiting\n", OUTPUT_GPIO);
return -EINVAL;
}
//Set our desired interrupt line as input
result = gpio_direction_input(INPUT_IRQ);
if (result != 0) {
printk("IRB/irqtest_init: Failed to set IRQ as input.. exiting\n");
free_device();
return -EINVAL;
}
//Set flags for our interrupt, guessing here..
irq_flags |= IRQF_NO_THREAD;
irq_flags |= IRQF_NOBALANCING;
irq_flags |= IRQF_TRIGGER_RISING;
irq_flags |= IRQF_NO_SOFTIRQ_CALL;
//register interrupt
result = request_irq(gpio_to_irq(INPUT_IRQ), irq_handle_function, irq_flags, "irq testing", device);
if (result != 0) {
printk("IRB/irqtest_init: Failed to reserve GPIO 38\n");
return -EINVAL;
}
printk("IRB/irqtest_init: insert success\n");
return 0;
}
void irqtest_exit(void) {
if (device)
kfree(device);
gpio_free(INPUT_IRQ);
gpio_free(OUTPUT_GPIO);
printk("IRB/irqtest_exit: Removing irqtest module\n");
}
int irqtest_open(struct inode *inode, struct file *filp) {return 0;}
int irqtest_release(struct inode *inode, struct file *filp) {return 0;}
In the system, I have the following interrupts registered after the driver is loaded:
# cat /proc/interrupts
CPU0
16: 36379 - MXS Timer Tick
17: 0 - mxs-spi
18: 2103 - mxs-dma
60: 0 gpio-mxs irq testing
118: 0 - mxs-spi
119: 0 - mxs-dma
120: 0 - RTC alarm
124: 0 - 8006c000.serial
127: 68050 - uart-pl011
128: 151 - ci13xxx_imx
Err: 0
I wonder if the flags I declare for my IRQ are good? I noticed that with this configuration I can no longer reach the console, so the kernel seems totally consumed with servicing this 74 kHz trigger now... this can't be right?
I suppose it's not a big deal for me since this only happens during data transfer, but I still feel I'm doing something wrong...
Also, I wonder if it would be more efficient to map the registers with ioremap and trigger the output with direct memory writes?
Is there some way I could increase the priority of the interrupt even higher? Or could I somehow lock the kernel for the duration of the data transfer (~400 ms) and generate my output timing some other way?
Edit: Forgot to add /proc/interrupts output to the question...
What you experience here is interrupt jitter. This is to be expected on Linux, because the kernel regularly disables the interrupts for various tasks (entering a spinlock, handling an interrupt, etc.).
This will happen regardless of whether you have PREEMPT_RT or not, so expecting to generate a 74 kHz signal with regular interrupts is pretty much unrealistic.
Now, ARM has higher-priority interrupts called FIQs, which will never be masked or disabled.
Linux doesn't use FIQ, and is not built to deal with the fact that an FIQ could be used, so you won't be able to use the generic kernel framework.
From a Linux driver development point of view, however, it's not really different as long as you keep this in mind: you have to write a handler and associate it with an IRQ. You'll also have to poke the interrupt controller to make it generate an FIQ for the interrupt you want to use (the details of how to change it are platform-dependent; some platforms have functions for that, like i.MX25 with mxc_set_irq_fiq, others don't; i.MX23/28 don't, so you'll have to do it by hand).
The only catch is that the functions to set up an FIQ handler only work with an assembly-written handler, so you'll have to rewrite your handler in assembly (with your current code, it should be trivial though).
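To make that concrete, here is a rough sketch of the C-side registration against the kernel's FIQ API (claim_fiq()/set_fiq_handler()/set_fiq_regs()/enable_fiq() from arch/arm/kernel/fiq.c). The handler symbols are hypothetical placeholders for your assembly stub, and routing the interrupt to FIQ is the platform-specific part you have to do by hand on i.MX23/28:

#include <linux/string.h>
#include <asm/fiq.h>

/* Assembly FIQ handler stub (hypothetical symbols exported from a .S file). */
extern unsigned char my_fiq_start, my_fiq_end;

static struct fiq_handler my_fh = {
    .name = "irqtest-fiq",
};

static int irqtest_setup_fiq(int fiq)
{
    struct pt_regs regs;
    int ret;

    ret = claim_fiq(&my_fh);            /* take ownership of the FIQ */
    if (ret)
        return ret;

    /* Copy the assembly handler into the FIQ vector. */
    set_fiq_handler(&my_fiq_start, &my_fiq_end - &my_fiq_start);

    /* Preload the banked FIQ registers (r8-r13) the handler will use. */
    memset(&regs, 0, sizeof(regs));
    set_fiq_regs(&regs);

    /* Platform-specific step (done by hand on i.MX23/28): route this
     * interrupt to FIQ in the interrupt controller, then unmask it. */
    enable_fiq(fiq);
    return 0;
}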
You can grab additional details from the blog post Alexandre posted (http://free-electrons.com/blog/fiq-handlers-in-the-arm-linux-kernel/), where you'll find working code, samples, and explanations of how it all works together.
You can have a look at what my colleague Maxime Ripard did using an FIQ on a similar SoC (i.mx28) :
http://free-electrons.com/blog/fiq-handlers-in-the-arm-linux-kernel/
Try these flags:
int irq_flags;
...
irq_flags = IRQF_TRIGGER_RISING | IRQF_EARLY_RESUME;
I have kernel 3.8.11 and can't find the IRQF_NO_SOFTIRQ_CALL define. Is it only in 3.8.13?
Also, I didn't see the definition of irq_flags. Where is it?