RPi4 : Device driver - microsecond delay/count [duplicate] - linux-kernel

This question already has answers here:
How to place microsecond delay in kernel?
(2 answers)
Closed 1 year ago.
I am a beginner in Linux device drivers. I am trying to implement a device driver for an ultrasonic sensor (HC-SR04) on a Raspberry Pi 4B with the following details:
OS: Raspberry Pi OS (Raspbian GNU/Linux 10, buster)
Kernel: Linux raspberrypi 5.10.20-v7l+ armv7l GNU/Linux
The problem I am facing is creating a microsecond delay for the sensor's trigger pulse, and also measuring the duration of the ECHO signal in microseconds. I tried usleep(), but it gives an implicit-declaration error that I couldn't resolve even after trying the fixes (such as adding #define _BSD_SOURCE) suggested in other forums. Searching around, I found that jiffies cannot provide microsecond precision, and both clock() and sched_clock() kept giving errors.
It would be really helpful if anyone could suggest a way to implement a microsecond delay and a microsecond counter in a device driver.
Thank you in advance.
[SOLVED]
udelay() from the header linux/delay.h solves the microsecond delay issue.
The other issue was measuring the duration of a process in microseconds, which was solved with the following code:
unsigned int data = 0;
ktime_t start;

start = ktime_get();                 /* timestamp before the measured section */
/* ... process to check ... */
data = (unsigned int)ktime_to_ns(ktime_sub(ktime_get(), start));
data /= 1000;                        /* convert nanoseconds to microseconds */

udelay() should be the function you are looking for.
usleep() is a C standard library function and therefore not usable in kernel space. You will have to use the kernel's own delay and timer facilities; refer to the kernel's timer documentation for more information.
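A minimal sketch of how the two pieces can fit together in an HC-SR04 driver, assuming the trigger and echo GPIOs have already been requested elsewhere; the pin numbers and the function name here are placeholders, not part of the original answer:

#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/ktime.h>

#define HCSR04_TRIG  23   /* hypothetical trigger GPIO */
#define HCSR04_ECHO  24   /* hypothetical echo GPIO */

static unsigned int hcsr04_measure_us(void)
{
    ktime_t start;

    /* 10 us trigger pulse, per the HC-SR04 datasheet */
    gpio_set_value(HCSR04_TRIG, 1);
    udelay(10);
    gpio_set_value(HCSR04_TRIG, 0);

    /* Busy-wait for the echo pulse and time it with ktime */
    while (!gpio_get_value(HCSR04_ECHO))
        ;
    start = ktime_get();
    while (gpio_get_value(HCSR04_ECHO))
        ;

    return (unsigned int)(ktime_to_ns(ktime_sub(ktime_get(), start)) / 1000);
}

The busy-wait loops keep the sketch short; a more robust driver would take the ktime stamps in an interrupt handler on the ECHO line and add a timeout.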

Related

ATmega16A SMD Package Stops Working after a Few Hours

Hello
I wrote a program for the ATmega16A microcontroller; 6 of its pins are defined as inputs and 6 as outputs.
UART communication and a timer are also defined in the program.
There is no problem when I program the software on the DIP package.
But on the SMD package, the micro stops working after a few hours and does not work even after resetting.
I wrote the program in C
and flashed the chip with avrdude.
Hex file size: 8.5 KB
Is it possible that the problem is with my code?

SWV in STM32F302 - printf() with different characters

I found some answers, but they didn't solve my issue on the STM32F302.
I configured the debug run as follows, so that printf() goes to the SWV ITM Data Console:
[image: debug configuration]
I implemented the _write function as follows:
int _write(int file, char *ptr, int len)
{
    int DataIdx;

    for (DataIdx = 0; DataIdx < len; DataIdx++)
    {
        ITM_SendChar(*ptr++);   /* push each byte to ITM stimulus port 0 */
    }
    return len;
}
I also tried setting up the system clock for "Asynchronous Trace" and for "Serial Wire"; neither worked, and I keep getting the same output (the SWV graph does not work either):
[image: SWV output]
Any suggestions about this issue? I just want to watch the variable to make sure I'm getting the correct measurement.
PS: a brief summary of my project: an ADC for a light sensor. I need to generate a graph from a laser sample measurement, take this measurement with the STM32 and a photodiode, then send the .csv or .txt over USB to a computer to analyse the data.
I found what my problem was:
My "Core Clock (MHz)", in the debug settings, was wrong and that's why my SWV was not working properly
If there is no SWV data, you need to connect the SWO pin to the ST-LINK/V2, because SWV data is transmitted on the SWO pin. On my STM32F3DISCOVERY, solder bridge SB10, which connects the PB3/SWO pin to the T_SWO debugger net, was not soldered. After soldering SB10, SWV worked perfectly.
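As a quick sanity check, the sketch below (an assumption on my part, not from either answer; it relies on the CMSIS core definitions pulled in by the ST device header, and swv_ready is a made-up name) verifies that tracing is actually enabled before trusting printf() over SWV, since ITM_SendChar() silently discards characters otherwise:

#include "stm32f3xx.h"   /* assumed ST device header providing the CMSIS core registers */

/* Returns non-zero when trace, the ITM and stimulus port 0 are all enabled. */
static int swv_ready(void)
{
    return (CoreDebug->DEMCR & CoreDebug_DEMCR_TRCENA_Msk) &&
           (ITM->TCR & ITM_TCR_ITMENA_Msk) &&
           (ITM->TER & 1UL);
}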

How to use a 4 Mbaud rate with FTDI on OSX?

Is it possible to use a baud rate of 4 Mbaud (B4000000) with Apple's FTDI driver? Or is FTDI's VCP driver better for this?
Speeds up to B230400 are defined in termios.h (*), with each speed being defined as the integer matching its speed (unlike Linux). However, the simple hack #define B4000000 4000000 does not work.
I remember being able to use 4 Mbaud with FTDI around 5 years ago (before Apple provided a driver) by using FTDI's VCP driver and hacking FTDIUSBSerialDriver.kext/Contents/Info.plist so that one of the "allowed" speeds (like B2400) is aliased to 4 Mbaud by the driver. Is this still the recommended method? (I suppose this would require disabling kext security, with sudo nvram kext-dev-mode=1.)
I am using the baud rate like this, where serBaudRate is something like B2400 or (ideally) B4000000.
if ((fd = open(serPortName, O_RDWR)) < 0) { perror("aborting"); return; }
tcgetattr(fd, &tty); /* get attributes */
cfsetospeed(&tty, (speed_t) serBaudRate); /* output speed */
cfsetispeed(&tty, (speed_t) serBaudRate); /* input speed */
tcsetattr(fd, TCSANOW, &tty); /* set attributes */
(*) the full path is /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/sys/termios.h
I suspect that hacking the FTDI driver's Info.plist to alias one of the standard speeds is still the only way to do it. The Apple driver is pretty basic (as of a few years ago it didn't even support CTS/RTS signals) and I don't see any way to specify a non-standard baud rate with it. It looks like the FTDI Info.plist still supports those baud-rate configuration options, too.
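One small follow-up sketch (standard termios calls only; the function name is mine) that can show whether the driver actually accepted a requested rate, since tcsetattr() may succeed while silently keeping the old speed:

#include <stdio.h>
#include <termios.h>

/* After the tcsetattr() call in the question, read the attributes back and
   print the speed the driver actually settled on. */
static void print_actual_speed(int fd)
{
    struct termios verify;

    if (tcgetattr(fd, &verify) == 0)
        printf("output speed now: %lu\n", (unsigned long)cfgetospeed(&verify));
    else
        perror("tcgetattr");
}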

Control GPIO without Device Tree configuration on i.MX6

On the old i.MX6 BSP without DT (Device Tree) support, a GPIO is controlled by the following code:
#define SABRESD_SHUTDOWN IMX_GPIO_NR(4, 15)
gpio_request(SABRESD_SHUTDOWN, "shutdown");
gpio_direction_output(SABRESD_SHUTDOWN, 1);
gpio_set_value(SABRESD_SHUTDOWN, 0);
gpio_free(SABRESD_SHUTDOWN);
However, on the new BSP I cannot use IMX_GPIO_NR anymore. Instead, of_get_named_gpio provides access to GPIOs defined in the DT. But that is a little complicated, because our product never changes its GPIO ports.
My question is: is it possible to control GPIOs without a DT definition (i.e. just using the old method)?
First of all, if you are using a newer kernel, I would recommend porting your code to use its latest features. Otherwise, why bother upgrading the kernel if you are not willing to adapt to it?
Second, never say never.
And finally:
#define IMX_GPIO_NR(bank, nr) (((bank) - 1) * 32 + (nr))
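Putting that macro together with the question's original calls, a minimal sketch of the legacy integer-based GPIO API (the pin and label come from the question; on recent kernels the descriptor-based gpiod/DT approach is the recommended route):

#include <linux/gpio.h>

#define IMX_GPIO_NR(bank, nr)  (((bank) - 1) * 32 + (nr))
#define SABRESD_SHUTDOWN       IMX_GPIO_NR(4, 15)   /* GPIO4_IO15 -> global number 111 */

static int shutdown_gpio_setup(void)
{
    int ret;

    ret = gpio_request(SABRESD_SHUTDOWN, "shutdown");
    if (ret)
        return ret;

    gpio_direction_output(SABRESD_SHUTDOWN, 1);   /* configure as output, start high */
    gpio_set_value(SABRESD_SHUTDOWN, 0);          /* then drive it low */
    return 0;
}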

A simple real-time-clock driver program not giving the expected output

I wrote a driver program which interacts with the RTC and gives the time.
The program is:
outb(GET_HR, CMD_REG);
hrs = inb(STAT_REG);
outb(GET_MIN, CMD_REG);
min = inb(STAT_REG);
pr_info("time: %u:%u\n", hrs, min);   /* print hours:minutes */
It works, but it gives the time in GMT. I want my local time (GMT+5:30). I explicitly added 5:30 in the program, but sometimes it doesn't give the correct time. Is there any built-in function to get local time?
It is not the task of the kernel to do time conversions. You should always work with UTC times in the kernel and translate them to local time in userspace.
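A small userspace sketch of that suggestion, using only the standard C library (it assumes the system timezone is configured, e.g. to GMT+5:30):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);      /* current time in UTC, seconds since the epoch */
    struct tm local;

    localtime_r(&now, &local);    /* applies the configured timezone offset */
    printf("local time: %02d:%02d\n", local.tm_hour, local.tm_min);
    return 0;
}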
