How can I interface a graphical LCD with a Raspberry Pi on a Yocto-built image? - raspberry-pi3

I am a newbie to the Yocto world, learning and trying to develop small projects.
I want to interface a graphical LCD with a Raspberry Pi 3; which interface should I use?
I have built a Yocto image for the Raspberry Pi and it is running on the board.
What changes do I need to make to the Yocto image to display something on the GLCD?

To begin with, you will have to read the specification of the LCD you have chosen. This way you will be able to see whether it is LVDS, RGB, etc., as well as calculate the timings according to your requirements so that the display works as expected. Here is some information.
Once you have that background, you need to know how Linux displays images; I leave you another resource here. Basically, you have to add the previously collected information to the kernel driver that manages the KMS (Kernel Mode Setting) interface, part of the Direct Rendering Manager (DRM) subsystem.
It will also be important to supply the kernel with the desired display resolution, among other things, on the kernel command line.
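As a concrete (hypothetical) example, once the panel is handled by a DRM/KMS driver, you can force a mode with the standard video= parameter on the kernel command line; the connector name and mode below are placeholders to replace with the ones matching your panel:

video=HDMI-A-1:800x480@60

The connector names your driver actually registers can be checked at runtime under /sys/class/drm.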

Related

Accessing GPIO Pins on Embedded Linux Running on a Raspberry Pi 3

I am currently trying to make a C program that will make a light blink on a Raspberry Pi 3 with embedded Linux installed. I am building the image for the OS using Yocto: Poky with the Raspberry Pi BSP and OpenEmbedded. The OS installs on the SD card, I have managed to add the layer that runs applications, and I have made a simple hello world. I am now trying to access the GPIO pins but am having trouble with this.
There are lots of resources that talk about blinking an LED on a Raspberry Pi.
Check here, here, here, or here.
Your question is not really related to Yocto, because once you have constructed the image and booted it correctly, you just need to know how to develop a userspace program to control your desired GPIO; a sketch follows below.
NOTES:
You may find that the GPIO is not exported to userspace, so check this answer.
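As a minimal sketch of such a program, assuming the legacy sysfs GPIO interface (CONFIG_GPIO_SYSFS) is enabled in the kernel and using GPIO 17 purely as a placeholder pin (newer kernels deprecate sysfs in favour of the gpiod character-device API):

/* Blink an LED on (for example) GPIO 17 via the legacy sysfs interface. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_str(const char *path, const char *s)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0) { perror(path); return -1; }
    if (write(fd, s, strlen(s)) < 0) perror(path);
    close(fd);
    return 0;
}

int main(void)
{
    /* Export the pin; this fails harmlessly if it is already exported. */
    write_str("/sys/class/gpio/export", "17");
    write_str("/sys/class/gpio/gpio17/direction", "out");

    for (int i = 0; i < 10; i++) {
        write_str("/sys/class/gpio/gpio17/value", "1");
        sleep(1);
        write_str("/sys/class/gpio/gpio17/value", "0");
        sleep(1);
    }

    write_str("/sys/class/gpio/unexport", "17");
    return 0;
}

Cross-compile it with your Yocto SDK toolchain and run it as root on the target.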

Flashing ESP32's memory without installing the whole IDF?

Problem
I'm looking for a way to flash an ESP32 module's memory without installing the whole IDF software suite.
Why
Because I want to integrate an ESP32 onto a custom board along with a low-performance ARM-powered CPU which runs a tiny Linux distro (based on Debian), and I want to flash the ESP32 from this tiny Linux distro.
I know I could use the bootloader, but who will upload the initial bootloader? I don't want extra steps, so my idea is to embed the ESP32 module onto my custom board and let Linux flash it from its factory state (when its flash is empty, i.e. no preloaded bootloader). Or is the serial bootloader always preinstalled on all ESP32 modules (like the ESP-WROOM-32)?
Why don't I want to use IDF? Because I don't want to build or debug anything; I just want to flash myprogram.bin onto the ESP32. Also, as the board is low-performance, it would take ages to download everything needed to run IDF.
Current state
The ESP32 module is now visible via UART (RX, TX, GND), and if I hold GPIO0 low, it runs the bootloader (my current module is embedded on a NodeMCU board, but there is no USB connected; this is raw UART!):
rst:0x1 (POWERON_RESET),boot:0x3 (DOWNLOAD_BOOT(UART0/UART1/SDIO_REI_REO_V2))
waiting for download
Could I expect the same behavior (controlling GPIO0 to run the bootloader) on all ESP32 modules, or does this work only because the folks at NodeMCU already preprogrammed some bootloader onto it?
I'm looking for a way to flash this ESP32, preferably without any Python script.
The ESP32 has a first-stage bootloader in ROM capable of writing to flash - that's what's printing your output. You can talk to it if you know the protocol - this protocol is implemented by the Python scripts in ESP-IDF. If you don't want to use the official implementation because it's too heavy, you'll have to write your own implementation of this protocol that scratches your specific itch. Fortunately, it's more or less documented, and you can likely reverse-engineer any missing knowledge from the official Python scripts.
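To give an idea of the scale, here is a hedged sketch of only the very first step of that protocol: SLIP-framing the SYNC command and writing it to a Linux serial port. The framing and packet layout follow Espressif's published serial-protocol description; /dev/ttyUSB0 and the baud rate are just examples, and reading the reply plus the later FLASH_BEGIN/FLASH_DATA commands are omitted:

/* Sketch: hand-roll the ROM loader's SYNC command, assuming the ESP32
 * was reset with GPIO0 held low so the ROM bootloader is listening. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

/* SLIP-encode buf into out, framing it with 0xC0 bytes. */
static size_t slip_encode(const uint8_t *buf, size_t len, uint8_t *out)
{
    size_t o = 0;
    out[o++] = 0xC0;
    for (size_t i = 0; i < len; i++) {
        if (buf[i] == 0xC0)      { out[o++] = 0xDB; out[o++] = 0xDC; }
        else if (buf[i] == 0xDB) { out[o++] = 0xDB; out[o++] = 0xDD; }
        else                     { out[o++] = buf[i]; }
    }
    out[o++] = 0xC0;
    return o;
}

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);  /* example port */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetspeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    /* SYNC request: direction 0x00, opcode 0x08, 16-bit payload length,
     * 32-bit checksum (unused for SYNC), then 07 07 12 20 + 32x 0x55. */
    uint8_t pkt[8 + 36] = { 0x00, 0x08, 36, 0x00, 0, 0, 0, 0,
                            0x07, 0x07, 0x12, 0x20 };
    memset(pkt + 12, 0x55, 32);

    uint8_t framed[2 * sizeof(pkt) + 2];
    size_t n = slip_encode(pkt, sizeof(pkt), framed);
    if (write(fd, framed, n) != (ssize_t)n)
        perror("write");

    /* A real flasher would now read the SLIP-framed response, retry the
     * sync a few times, and then issue the flash commands. */
    close(fd);
    return 0;
}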
Actually, Espressif also provides a nice and small library for flashing ESPs:
https://github.com/espressif/esp-serial-flasher
The serial flasher component provides a portable library for flashing Espressif SoCs (ESP32, ESP32-S2, ESP8266) from another host microcontroller. Espressif SoCs are normally programmed via a serial interface (UART). A port layer for the given host has to be implemented, if not already available.
One more (but very important) addition:
You may have to modify this repo to make it work correctly, and you might have to upload not just your binary but also the bootloader and the partition table.
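For orientation, the write path with esp-serial-flasher looks roughly like the sketch below. The names follow the repo's README at the time of writing, so treat this as an outline and check the current headers; a loader_port_* layer for your host's UART still has to be provided:

#include <string.h>
#include "esp_loader.h"  /* from esp-serial-flasher */

/* Hypothetical helper: push one binary (e.g. myprogram.bin) to flash
 * at the given offset, in block-sized chunks. */
static esp_loader_error_t flash_binary(const uint8_t *bin, size_t size,
                                       size_t addr)
{
    uint8_t block[1024];
    esp_loader_connect_args_t cfg = ESP_LOADER_CONNECT_DEFAULT();

    if (esp_loader_connect(&cfg) != ESP_LOADER_SUCCESS)
        return ESP_LOADER_ERROR_FAIL;
    if (esp_loader_flash_start(addr, size, sizeof(block)) != ESP_LOADER_SUCCESS)
        return ESP_LOADER_ERROR_FAIL;

    while (size > 0) {
        size_t n = size < sizeof(block) ? size : sizeof(block);
        memcpy(block, bin, n);
        if (esp_loader_flash_write(block, n) != ESP_LOADER_SUCCESS)
            return ESP_LOADER_ERROR_FAIL;
        bin += n;
        size -= n;
    }
    /* false = stay in the loader instead of rebooting into the app. */
    return esp_loader_flash_finish(false);
}

For a complete image you would call such a helper once each for the second-stage bootloader, the partition table, and the application, each at its proper offset.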

Controlling hub power from the Linux shell

I am using a Buildroot image (3.12 kernel) running on my Raspberry Pi with a USB LED light connected to it, and I want to control it (on/off) through the CLI.
I went through this. However, there is no control or level file in the power folder.
Is there any kernel configuration that I have to enable to get this?
Found the answer to this. I had to enable PM_SUSPEND in the kernel configuration to get the class files. But then, as mentioned in the comments, the Raspberry Pi has the USB power lines connected directly to the power rails.
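For reference, once the kernel exposes runtime power management, the knob looks like this (the 1-1 device path is only an example; and, as noted above, on the Raspberry Pi this merely allows autosuspend rather than physically cutting VBUS, since the port power is hardwired):

echo auto > /sys/bus/usb/devices/1-1/power/control
echo on > /sys/bus/usb/devices/1-1/power/control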

Viewing Linux kernel drivers built into the kernel, and how they get bound/mounted/started

I'm having a bit of a hard time fully understanding how the kernel starts in Linux. I'm a WinCE developer and our company has decided to switch to Linux.
We outsourced all of the board bring-up, and the package I received for our prototype board is quite a bit different from the Nitrogen6X we have been using.
Before I start listing the differences between the distros we created: the kernels are identical. The distro we have been using is a BusyBox system; the one we received from the vendor is sysvinit. I removed mdev from BusyBox and we are only using udev.
When I use the kernel on our BusyBox build, the touchscreen driver breaks, or doesn't run, or does something totally magical; I'm not quite sure... There is a /dev/input/event0 device which, when running on the sysvinit side, is a touch device. Is the kernel not the mechanism that binds the built-in drivers to a device node? I thought udev was for more dynamic events in the system.
On the other hand, I can't really tell what has been loaded on my device. Is there a way to list running drivers that were built into the kernel? Is my touchpad up? On WinCE this is a fairly simple process of looking at the registry to see which devices were loaded.
I guess what I'm really trying to discover isn't so much how to add a driver to the kernel as how the whole thing gets plumbed together. I've found plenty of documents on creating kernel modules, but I haven't found a good resource on how to pull everything together from scratch so you can actually use said modules. Going back to the example of the touchscreen driver: it's built into the kernel, so how does it get plugged into /dev/input/event0?
I'm having a difficult time finding good resources, mostly because searching Google for variations of linux/drivers/device nodes pulls in tons of random results from everywhere.
What you probably want to use now is evtest. It will let you know which input devices are present and ready to use on your system.
To get more information on the input subsystem, and more general information on how the kernel works, I can direct you to our training materials. The materials are free to download, use, and redistribute.
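If you want to do what evtest does from your own code, a minimal sketch (assuming the standard evdev interface from linux/input.h) is to walk the /dev/input/event* nodes and ask each one for its name with the EVIOCGNAME ioctl; cat /proc/bus/input/devices shows the same information without any code:

/* List /dev/input/event* nodes and print which device backs each one. */
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    char path[32], name[256];

    for (int i = 0; i < 32; i++) {
        snprintf(path, sizeof(path), "/dev/input/event%d", i);
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            continue;  /* no node: no driver registered this device */
        if (ioctl(fd, EVIOCGNAME(sizeof(name)), name) < 0)
            name[0] = '\0';
        printf("%s: %s\n", path, name);
        close(fd);
    }
    return 0;
}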
The general answer is: there is no single, easy place to look to discover which drivers have been compiled into the kernel. Of course, lsmod will display any drivers that were dynamically loaded after kernel boot.
The kernel does not create device nodes. That is, to quote your question, the kernel does not "bind" the driver to the device node. The association between kernel driver and device node is contained in the major and minor numbers registered when the driver is initialized. You can have a device node on your file system for which there is no corresponding driver (common especially on older systems where device nodes were statically created on the file system), and you can also have a driver installed for which there is no device node.
Modern Linux distros have device nodes created dynamically on a mount point called /dev; this is usually a tmpfs file system, meaning it is volatile: it gets destroyed on every shutdown and recreated dynamically on each new boot.
udev is the magic that creates most device nodes, based on events it receives from the kernel when a new device is discovered (which can be after boot, on device plug-in, like a USB disk) or at startup, when udev reads the queued events and acts on them. As you noted, BusyBox has a limited udev implementation called mdev.
Study udev and you will get a much better understanding of the process. Hope this helps a little.

Create virtual hardware, kernel, qemu for Android Emulator in order to produce OpenGL graphics

I am new to Android and wish to play around with the emulator.
What I want to do is create my own piece of virtual hardware that can collect OpenGL commands and produce OpenGL graphics.
I have been told that in order to do this I will need to write a Linux kernel driver to enable communication with the hardware. Additionally, I will need to write an Android userspace library to call the kernel driver.
To start with, I plan on making a very simple piece of hardware that only handles, say, one or two commands.
Has anyone here done something like this? If so, do you have any tips or possible links to extra information?
Any feedback would be appreciated.
Writing a hardware emulation is a tricky task and by no means easy. So if you really want to do this, I'd not start from scratch. In your case, I'd first start with something simpler (because many of the libraries are already in place on the guest and the host side): implementing an OpenGL passthrough for ordinary Linux through QEMU. What does it take?
First you add a virtual GPU to QEMU, which also involves adding a new graphics output module that uses OpenGL (so far QEMU uses SDL). Next you create DRI/DRM drivers in the Linux kernel that will run on the guest (Android uses its own graphics system, but for learning, DRI/DRM are fine), as well as in Mesa. On the host side you must translate what comes from QEMU into OpenGL calls. Since the host-side GPU is doing all the hard work, your DRI/DRM part will be quite minimal and will just build a bridge.
The emulator that comes with Android SDK 23 already runs OpenGL; you can try this out with the official MoreTeapots example: https://github.com/googlesamples/android-ndk/tree/a5fdebebdb27ea29cb8a96e08e1ed8c796fa52db/MoreTeapots
I am pretty sure that it is hardware accelerated, since all those polygons render at 60 FPS.
The AVD creation GUI in Android Studio has a hardware acceleration option, which should control options like:
==> config.ini <==
hw.gpu.enabled=yes
hw.gpu.mode=auto
==> hardware-qemu.ini <==
hw.gpu.enabled = true
hw.gpu.mode = host
hw.gpu.blacklisted = no
in ~/.android/avd/Nexus_One_API_24.a/.
