I'm developing an EGT embedded Linux application on a Microchip SAM Xplained Board. EGT is primarily C++ based, similar in some respects to Qt. The application I'm building naturally contains the GUI element & the interaction with hardware connected to the board.
For speed and convenience I'd like to develop as much of the GUI as possible on a desktop (EGT will run on a desktop Linux machine); however, I'm going to run into issues when hardware interaction occurs (e.g. calls to GPIO pins etc.).
Is there a gcc compile-time option to somehow block/redirect/override these hardware interactions with something that would allow the application to run on a desktop? If not, I think I'm looking at lots of #if arch = 'ARM' or something similar.
Thanks for looking!
Regards,
For anyone looking at this: it seems the way to go is some type of wrapper around the hardware calls (example below), as suggested by #sawdust, or using QEMU (which can be built using Yocto).
// Enable compiling on desktop: stub out the libgpiod calls so the GUI still builds and runs
#if defined(__x86_64__) || defined(_M_X64)
#pragma GCC diagnostic ignored "-Wunused-value"
#define GPIO_CHIP_GET_LINE(chip, offset) (NULL)
#define GPIO_CHIP_OPEN_BY_NAME(name) (1)
#define GPIO_LINE_REQUEST_OUTPUT(line, consumer, default_val) (1)
#define GPIO_LINE_REQUEST_INPUT(line, consumer) (1)
#define GPIO_LINE_SET_VALUE(line, value) (1)
#define GPIO_LINE_GET_VALUE(line) (1)
#else
// Target build: forward to the real libgpiod functions
#define GPIO_CHIP_GET_LINE(chip, offset) gpiod_chip_get_line(chip, offset)
#define GPIO_CHIP_OPEN_BY_NAME(name) gpiod_chip_open_by_name(name)
#define GPIO_LINE_REQUEST_OUTPUT(line, consumer, default_val) gpiod_line_request_output(line, consumer, default_val)
#define GPIO_LINE_REQUEST_INPUT(line, consumer) gpiod_line_request_input(line, consumer)
#define GPIO_LINE_SET_VALUE(line, value) gpiod_line_set_value(line, value)
#define GPIO_LINE_GET_VALUE(line) gpiod_line_get_value(line)
#endif
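To show how the wrappers might be consumed, here is a minimal sketch of a call site that builds on both desktop and target, assuming the wrapper macros above are visible. The chip name "gpiochip0", the line offset 21 and the "egt-app" consumer string are assumptions for illustration only, and error handling is omitted:
// Hypothetical call site: the same code compiles on desktop (stubs) and target (libgpiod)
#if defined(__x86_64__) || defined(_M_X64)
struct gpiod_chip;   // forward declarations so the pointers compile on desktop
struct gpiod_line;
#else
#include <gpiod.h>
#endif

static void set_status_led(int on)
{
    struct gpiod_chip *chip = (struct gpiod_chip *) GPIO_CHIP_OPEN_BY_NAME("gpiochip0");
    struct gpiod_line *line = (struct gpiod_line *) GPIO_CHIP_GET_LINE(chip, 21);

    GPIO_LINE_REQUEST_OUTPUT(line, "egt-app", 0);   // expands to a harmless (1) on desktop
    GPIO_LINE_SET_VALUE(line, on);

    (void) chip;   // keep the desktop build warning-free
    (void) line;
}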
Related
I am trying to port the fantastic ASUS XONAR-series driver for Linux, written by Clemens Ladisch, to Mac OSX.
Right now, a very rough version that compiles is available at: github.com/i3roly/CMI8788
My question regards the pthread.h header on OS X. By default, including pthread.h tries to define a structure that is markedly different from the one pulled in through the IOKit headers. For brevity I will quote an informative comment from a GitHub issue (https://github.com/civetweb/civetweb/issues/364#issuecomment-255438891):
#include <pthread.h>
#include <sys/_types/_mach_port_t.h>
typedef __darwin_mach_port_t mach_port_t;
versus
#include <IOKit/audio/IOAudioDevice.h>
#include <IOKit/IOService.h>
#include <IOKit/IORegistryEntry.h>
#include <IOKit/IOTypes.h>
#include <IOKit/system.h>
#include <mach/mach_types.h>
#include <mach/host_info.h>
#include <mach/message.h>
#include <mach/port.h>
/*
* For kernel code that resides outside of Mach proper, we opaque the
* port structure definition.
*/
struct ipc_port;
typedef struct ipc_port *ipc_port_t;
#define IPC_PORT_NULL ((ipc_port_t) 0UL)
#define IPC_PORT_DEAD ((ipc_port_t)~0UL)
#define IPC_PORT_VALID(port) \
((port) != IPC_PORT_NULL && (port) != IPC_PORT_DEAD)
typedef ipc_port_t mach_port_t;
Now, I can get around this by doing
#define _MACH_PORT_T
#include <pthread.h>
but I am not sure if this is a safe solution, since it seems to me that the pthreads API shipped with Xcode is only meant to be used by user-land programs. Is this assumption wrong? Is using this macro to get around the redefinition problem reasonable?
Have others tried to write kernel-land drivers for OS X using pthreads and encountered this issue? Any insight would be appreciated.
Thank you.
Stupid question.
I don't know why I didn't remind myself that you CANNOT USE PTHREADS IN THE KERNEL, especially when I have experience building the Linux kernel (which should have served as an easy reminder that YOU CANNOT DO THIS AND IT IS A BAD IDEA).
hits self over the head with a slipper
I have no idea why this didn't click yesterday.
I recently bought a NodeMCU ESP8266 and started playing with it. Even though almost all the sketches I've written for Arduino microcontrollers work fine on the ESP8266, there are some differences, for example reading from the EEPROM or using the internal VREF on my ESP8266.
I know that one can identify which Arduino board is connected using the following code:
#if defined(__AVR_ATmega1280__) || defined(__AVR_ATmega2560__)
//Code in here will only be compiled if an Arduino Mega is used.
#elif defined(__AVR_ATmega328P__) || defined(__AVR_ATmega168__)
//Code in here will only be compiled if an Arduino Uno (or older) is used.
#elif defined(__AVR_ATmega32U4__) || defined(__AVR_ATmega16U4__)
//Code in here will only be compiled if an Arduino Leonardo is used.
#endif
However, this works for Arduino microcontrollers. How would I be able to do the same for an ESP8266 microcontroller?
Like you mentioned in your question, the #if defined(__xxxxx__) statements are not actually running on the microcontroller. They're preprocessor directives: they decide which code gets passed to the actual compiler and which gets omitted.
What you can do is write your code to read from the EEPROM, and for the sections that differ between microcontrollers (I'd recommend a separate function for each) choose between the variants at compile time.
For example
#ifdef AVR_MICROCONTROLLER
read_from_eeprom(...)
{
// code for the AVR chip goes here
}
#else // I'm assuming there are no other options besides AVR and ESP
read_from_eeprom(...)
{
// code for the ESP chip goes here
}
#endif
Then when compiling, use a -D flag to specify that you are using AVR, or omit the flag for the ESP.
gcc ... -D AVR_MICROCONTROLLER ...
I sense the reason you asked this question might stem from confusion about where the __AVR_ATmega1280__ etc. macros come from.
Basically, they aren't keywords used by the compiler to decide which chip to compile for. They're created by the person(s) who wrote the source file, and they're used for portability so the same file can be used with many different platforms/processors.
In my answer I used a command-line flag to define the AVR_MICROCONTROLLER macro.
Other projects (e.g. the Marlin firmware running on Arduinos) also have config files full of #define statements that can be used to configure exactly how the code is compiled; a tiny sketch of that pattern follows below. Long story short: yes, the same can be done for other microcontrollers, and you would do it by writing your own preprocessor #if statements and then choosing which parameters/variables to set at compile time, depending on the chip you want to run the code on.
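As a tiny sketch of that config-file pattern (the file name and the options below are made up for illustration and are not taken from Marlin):
/* config.h -- hypothetical build configuration, edited (or generated) per target */
#define USE_INTERNAL_VREF 1      /* set to 0 for boards without a usable internal reference */
#define EEPROM_SETTINGS_SIZE 512 /* bytes reserved for stored settings */

/* elsewhere in the source */
#include "config.h"

#if USE_INTERNAL_VREF
/* code that selects the internal voltage reference */
#else
/* code that assumes an external reference */
#endif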
#if defined(__AVR_ATmega1280__) || defined(__AVR_ATmega2560__)
//Code in here will only be compiled if an Arduino Mega is used.
Is it possible to do the same for other types of microcontrollers, for example the one in the ESP8266? What should I look for?
The mentioned macros are built-in macros provided by avr-gcc. They are used to determine which device to compile for, for example by avr-libc. Actually, these macros are no longer built into the compiler / preprocessor today; they are provided by the device-specs file device-specs/specs-<device>, which injects the respective -D__AVR_<DEVICE>__ into the preprocessor's command line according to -mmcu=<device>.
What you can use for AVR is #ifdef __AVR__, which is still a built-in macro from avr-gcc / avr-g++ when compiling for AVR.
The ESP8266 is a completely different architecture; you would use xtensa-g++ to compile code for that µC, and that incarnation of GCC has the built-in defines __xtensa__ and __XTENSA__ (and definitely not __AVR__).
However, whereas device support in the AVR tools is very sophisticated and hundreds of different -mmcu=<device> options are recognized by avr-gcc, this is not the case for xtensa. You will have to define your own macros if you want to distinguish between different xtensa derivatives.
As AVR and xtensa are very different architectures, you can also put the architecture-specific stuff into modules of its own, like an eeprom-avr.cpp that provides read_from_eeprom (or whatever) for AVR and is only included in the build when building for AVR with avr-g++, and a similar xtensa-only module eeprom-xtensa.cpp that is only included when building with xtensa-g++; a sketch of that layout follows.
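A minimal sketch of that split, assuming a shared header and a build that compiles exactly one of the two implementation files per target (all names below are illustrative):
/* eeprom.h -- shared interface, included by both builds */
#ifndef EEPROM_H
#define EEPROM_H
#include <stdint.h>
uint8_t read_from_eeprom(uint16_t address);
#endif

/* eeprom-avr.cpp -- listed in the sources only when building with avr-g++ */
#include "eeprom.h"
uint8_t read_from_eeprom(uint16_t address)
{
    /* AVR-specific EEPROM access goes here */
    return 0;
}

/* eeprom-xtensa.cpp -- listed in the sources only when building with xtensa-g++ */
#include "eeprom.h"
uint8_t read_from_eeprom(uint16_t address)
{
    /* ESP8266 (xtensa) specific access goes here */
    return 0;
}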
How to programmatically identify if the code is being compiled for AVR or ESP8266?
#if defined (__AVR__)
/* Code for AVR. */
#elif defined (__XTENSA__)
/* Code for ESP8266. */
#else
#error Compiling for unsupported target.
#endif
On an old i.MX6 BSP without DT (Device Tree), GPIO is controlled by the following code:
#define SABRESD_SHUTDOWN IMX_GPIO_NR(4, 15)
gpio_request(SABRESD_SHUTDOWN, "shutdown");
gpio_direction_output(SABRESD_SHUTDOWN, 1);
gpio_set_value(SABRESD_SHUTDOWN, 0);
gpio_free(SABRESD_SHUTDOWN);
However, on the new BSP I cannot use IMX_GPIO_NR anymore. Instead, of_get_named_gpio provides access to GPIOs defined in the DT, but this is a little complicated because our product never changes the GPIO ports.
My question is: is it possible to control GPIOs without a DT definition (just using the old method)?
First of all, if you are using a newer kernel, I would recommend that you port your code to support the latest features. Otherwise, why bother upgrading the kernel if you are not willing to adapt to it?
Second, never say never.
And finally:
#define IMX_GPIO_NR(bank, nr) (((bank) - 1) * 32 + (nr))
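So the macro is nothing more than bank/pin arithmetic, and you can keep that one line in your own driver and continue using the old integer-based GPIO calls; a short sketch using the bank/pin from the question (error handling trimmed to the minimum):
/* GPIO4_IO15 maps to legacy GPIO number (4 - 1) * 32 + 15 = 111 */
#define IMX_GPIO_NR(bank, nr) (((bank) - 1) * 32 + (nr))
#define SABRESD_SHUTDOWN IMX_GPIO_NR(4, 15)

if (gpio_request(SABRESD_SHUTDOWN, "shutdown") == 0) {
    gpio_direction_output(SABRESD_SHUTDOWN, 1);
    gpio_set_value(SABRESD_SHUTDOWN, 0);
    gpio_free(SABRESD_SHUTDOWN);
}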
I am trying to write a kernel module that takes control of the UART1 RX & TX pins at runtime, changes their mode to GPIO, sends some commands (using bit-banging) and changes their mode back to UART.
Now, is there a way to change pin modes at runtime in a kernel module on the BeagleBone Black? I tried accessing the CONTROL_MODULE directly, which did not return an error; however, nothing seems to be written out.
#define CONTROL_MODULE_START 0x44E10000 // CONTROL_MODULE starting address in memory
#define CONTROL_MODULE_END 0x44E11FFF // CONTROL_MODULE end address in memory
#define CONTROL_MODULE_SIZE (CONTROL_MODULE_END - CONTROL_MODULE_START)
#define GPIO1_17_OFFSET 0x844 // control offset for GPIO1_17
#define GPIO3_19_OFFSET 0x9a4 // control offset for GPIO3_19
.
.
.
if (!(control_module = ioremap(CONTROL_MODULE_START, CONTROL_MODULE_SIZE))) {
printk(KERN_ERR "UARTbitbangModule: unable to map control module\n");
return -1;
}
// set both GPIOs to mode 7, input enabled
value = 0x7;
iowrite32(value, control_module + GPIO1_17_OFFSET);
iowrite32(value, control_module + GPIO3_19_OFFSET);
printk(KERN_INFO "UARTbitbangModule: mode GPIO1_17: %d\n", control_module[GPIO1_17_OFFSET]);
printk(KERN_INFO "UARTbitbangModule: mode GPIO3_19: %d\n", control_module[GPIO3_19_OFFSET]);
The corresponding dmesg output looks like this:
[22637.953610] UARTbitbangModule: mode GPIO1_17: 0
[22637.960000] UARTbitbangModule: mode GPIO3_19: 0
I also thought about using the pinctrl subsystem directly (see https://www.kernel.org/doc/Documentation/pinctrl.txt), but I cannot make sense of how to interact with it.
Any ideas on how to change pin modes on the bone at runtime or gain write access to the control module?
Edit: I am using a slightly tweaked (better rt performance) 4.1.15-bone-rt-r17 kernel with a BeagleBoard.org Debian Image 2015-03-01
You can use "linux/gpio.h" header file. An example code from Derek Molloy is here. This code is simple and gpio_request and gpio_direction_input or gpio_direction_output commands do what you need and you can change pin direction without directly changing CONTROL_MODULE register.
Regards
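A minimal sketch of that approach, using the legacy integer-based GPIO API from linux/gpio.h. The GPIO number assumes GPIO1_17 maps to Linux GPIO 49 (1 * 32 + 17); check this against your board's pin tables, and the label string is just a placeholder:
#include <linux/module.h>
#include <linux/init.h>
#include <linux/gpio.h>

/* GPIO1_17 on the AM335x: bank 1 * 32 + 17 = 49 (verify for your setup) */
#define BITBANG_TX_GPIO 49

static int __init bitbang_init(void)
{
    int ret;

    ret = gpio_request(BITBANG_TX_GPIO, "uart-bitbang-tx");
    if (ret)
        return ret;

    /* configure as output, initially high; direction is handled by the GPIO subsystem */
    ret = gpio_direction_output(BITBANG_TX_GPIO, 1);
    if (ret) {
        gpio_free(BITBANG_TX_GPIO);
        return ret;
    }

    gpio_set_value(BITBANG_TX_GPIO, 0);   /* bit-bang as needed */
    return 0;
}

static void __exit bitbang_exit(void)
{
    gpio_free(BITBANG_TX_GPIO);
}

module_init(bitbang_init);
module_exit(bitbang_exit);
MODULE_LICENSE("GPL");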
Can anyone explain what the following code means?
If __KERNEL__ is not defined, it defines the following macros.
When and where is __KERNEL__ defined?
/* only for userspace compatibility */
#ifndef __KERNEL__
/* IP6 Hooks */
/* After promisc drops, checksum checks. */
#define NF_IP6_PRE_ROUTING 0
/* If the packet is destined for this box. */
#define NF_IP6_LOCAL_IN 1
/* If the packet is destined for another interface. */
#define NF_IP6_FORWARD 2
/* Packets coming from a local process. */
#define NF_IP6_LOCAL_OUT 3
/* Packets about to hit the wire. */
#define NF_IP6_POST_ROUTING 4
#define NF_IP6_NUMHOOKS 5
#endif /* ! __KERNEL__ */
When you compile your kernel, __KERNEL__ is defined on the command line.
User-space programs need access to the kernel headers, but some of the info in kernel headers is intended only for the kernel. Wrapping some statements in an #ifdef __KERNEL__/#endif block ensures that user-space programs don't see those statements.
I used Google to search for __KERNEL__ and found this.
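To illustrate the mechanism (the header and the declaration below are hypothetical, not taken from the netfilter code above): the kernel build adds -D__KERNEL__ to its preprocessor flags, so a header shared with userspace can hide its kernel-only parts like this:
/* example_shared.h -- hypothetical header shipped to both kernel and userspace */

#define EXAMPLE_MAX_HOOKS 5          /* visible to every includer */

#ifdef __KERNEL__
/* kernel-only declaration: a plain userspace build never sees it */
int example_register_hook(unsigned int hooknum);
#endif
A userspace program compiled with a plain gcc invocation sees only the first define; the kernel build, which passes -D__KERNEL__, also sees the kernel-only declaration.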
The __KERNEL__ macro is defined because there are programs (such as libraries) that include kernel headers, and there are many things in them that you don't want those programs to pull in. So most kernel modules will want the __KERNEL__ macro to be defined.
The same header is used by the userspace iptables application (and possibly glibc and others), hence the protection for non-kernel code.