I am a beginner with Device Tree.
I know that after some architecture-specific initialization, the start_kernel function is called.
Could someone point me to some material on how the dtb is parsed?
Which fdt function is called first?
The video tutorial below from Bootlin is an excellent starting point for understanding the device tree.
https://www.youtube.com/watch?v=m_NyYEBxfn8
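As for where the parsing starts: on ARM Linux (roughly, and depending on kernel version) the early setup code reached from setup_arch(), such as setup_machine_fdt() and the early_init_dt_scan helpers, is among the first consumers of the dtb, and the actual walking of the blob is done with libfdt primitives. As a hedged illustration of what that walking looks like, here is a minimal user-space sketch that reads a property from a dtb file with libfdt; the property name and error handling are kept deliberately simple.

    /* Minimal sketch: read the root node's "model" property from a dtb.
     * Build (assuming libfdt is installed): gcc read_dtb.c -lfdt -o read_dtb
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libfdt.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file.dtb>\n", argv[0]);
            return 1;
        }

        /* Read the whole blob into memory. */
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);
        void *blob = malloc(size);
        fread(blob, 1, size, f);
        fclose(f);

        /* Sanity-check the header, much like the kernel's early fdt code does. */
        if (fdt_check_header(blob) != 0) {
            fprintf(stderr, "not a valid dtb\n");
            return 1;
        }

        /* Look up the root node and read its "model" property. */
        int root = fdt_path_offset(blob, "/");
        int len;
        const char *model = fdt_getprop(blob, root, "model", &len);
        if (model)
            printf("model: %s\n", model);

        free(blob);
        return 0;
    }

The kernel's early code does essentially the same header check and property walking, just on the blob address handed over by the bootloader.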
I'm pretty new to coding in VHDL and I just finished making a simple game using a pretty rough VGA driver that I made. The last thing I need to do now is hook up a joystick so I can control the object in the game (this game is a mini project, so I have to present it, and using the onboard switches wouldn't cut it). The problem is that the joystick gives an analog input, and I don't know how to include that in my VHDL program, or whether it's even possible. I'm using a DE10-Lite board. I'm sorry if my question is messy; I hope I made it clear. Thanks in advance.
The DE10-Lite is built with a MAX 10 FPGA, which has two on-chip ADCs, and the board has analog buffers to scale 5 V analog inputs down to the ADC's acceptable 2.5 V range.
You'll need to instantiate the "Modular ADC Core" and a PLL to clock it.
Depending on your project's needs, you can instantiate just the ADC control core (it has a simple streaming interface) or the "standard sequencer with Avalon-MM sample storage".
Check the board's manual to find which pins are connected to the ADC banks.
Apparently, there's an example ADC project included with the "CD-ROM" that you can download from the Terasic site.
OK, here is my problem: I do not know the correct terms to find what I am looking for on Google, so I hope someone here can help me out.
When developing real-time programs on embedded devices, you might have to iterate a few hundred or thousand times until you get the desired result. With, e.g., ARM devices, you wear out the internal flash quite quickly that way. So typically you develop your programs to reside in the RAM of the device and all is fine. This is done using GCC's functionality to split the code into various sections.
Unfortunately, the RAM of most devices is much smaller than the flash. So at some point your program gets too big to fit in RAM with all its variables etc. (The device is sized with the assumption that the whole code will fit into flash later.)
Classical shared objects do not work, as there is nothing like a dynamic linker in my environment. There is no OS or anything of the kind.
My idea was the following: for the controller it is no problem to execute code from both RAM and flash. When compiling with the correct attributes on the functions, it is also no big problem for the compiler to put part of the program in RAM and part in flash.
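To illustrate that part of the idea, here is a minimal sketch using GCC section attributes. The section names ".flashfunc" and ".ramfunc" are only examples; they have to match whatever your linker script defines, and the startup code has to copy the RAM section's load image from flash to RAM before main().

    /* Sketch: place selected functions in flash or RAM via named sections.
     * ".flashfunc" and ".ramfunc" are example names - adjust them to the
     * sections your linker script actually provides.
     */

    /* Stable, tested code: keep it in flash. */
    __attribute__((section(".flashfunc")))
    int stable_filter(int sample)
    {
        return (sample * 3) / 4;   /* placeholder body */
    }

    /* Code under active development: run it from RAM. */
    __attribute__((section(".ramfunc")))
    int experimental_control_loop(int input)
    {
        return stable_filter(input) + 1;   /* placeholder body */
    }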
When I have some functionality running successfully, I create a library from it and put that in the flash. The main development then happens in the 'volatile' part of the code, which lives in RAM, so the flash gets preserved.
The problem here is: I need to make sure that the library always gets linked to the exact same location as long as I do not reflash. So a given function must end up at the same flash address on every compile cycle. When something is missing from the flash image, it must be placed in RAM, or a linking error must be thrown.
I thought about putting together a real library and linking against that. Here I am a bit lost: I need to tell GCC/LD to link against a prelinked file (and how to create such a prelinked file in the first place).
It should be possible to put all the library objects together, link them into the flash region, extract the resulting addresses, and then link the main program (for use in RAM) against them. But how do I do these steps?
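For what it's worth, one common alternative to true prelinking (not something from this thread, just a sketch of a possible workaround) is to export a small table of function pointers at a single fixed flash address and let the RAM program call through it; then only the table's address has to stay fixed, not every individual function. The address 0x08004000 and the section name ".flash_api" below are made-up examples that would have to match your memory map and linker script.

    /* Sketch: stable entry points into the flash image via a function table
     * at a fixed, known address. FLASH_API_ADDR and ".flash_api" are
     * hypothetical and must match your own linker script / memory map.
     */

    /* --- shared header (used by both images) --- */
    typedef struct {
        int  (*filter)(int sample);
        void (*log_event)(int code);
    } flash_api_t;

    #define FLASH_API_ADDR 0x08004000u   /* example address from your map */

    /* --- flash image: define the table at the fixed location --- */
    static int  filter_impl(int sample)  { return (sample * 3) / 4; }
    static void log_event_impl(int code) { (void)code; }

    __attribute__((section(".flash_api"), used))
    const flash_api_t flash_api = {
        .filter    = filter_impl,
        .log_event = log_event_impl,
    };

    /* --- RAM image: call through the table at the known address --- */
    static const flash_api_t * const api = (const flash_api_t *)FLASH_API_ADDR;

    int process(int sample)
    {
        return api->filter(sample);
    }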
On the internet there is the term prelink, as well as a matching program for Linux, which is intended to speed up loading times. I do not know whether this program might help me as a side effect. I doubt it, but I do not understand the internals of how it works.
Do you have a good idea how to reach the goal?
You are solving a non-problem. Embedded flash usually has a MINIMUM endurance of 10,000 write cycles. So even if you flash it 20 times a day, it will last about a year and a half. An ST Nucleo board is $13, so that's less than 3 pennies a day :-). The TYPICAL endurance is even longer, at about 100,000 cycles. It will be a long time before you wear them out.
Now if you are using them for dynamic storage, that might be a concern, depending on the usage patterns.
But to answer your question: you can build your code into a library (.a file) easily enough. However, GCC/LD does not guarantee that it links the object code in any particular order, as that depends on the optimization level. Furthermore, only functions that are actually referenced are pulled in from a library, so if your function calls change, it may pull in more or fewer library functions.
I am new to VHDL and FPGAs. I have a Cyclone II DE1 board. I am trying to write VHDL that produces an animation of something (say, an algorithm). I have worked with the board and played with the switches. Now, the biggest challenge for me is to get the display working. For simple programs, I load the .sof file and directly manipulate the switches. I downloaded VHDL code that draws a rectangle, to understand VGA, and compiled it. When I load the .sof file, it loads, but I do not see anything on the screen. My question is: should designs that use VGA be loaded/run in any different manner? I see that lots of material is available for Xilinx but not for the Cyclone II. Can anyone help me with how VGA works with respect to coding, compiling and running? I know the theory; I need some basic practical knowledge.
All you need is to write a VGA driver. I learned it on this site. The example is quite suitable for someone who isn't familiar with VGA, and you can download the example code as well. Pay attention to the timing specifications for the various VGA modes at the bottom of that page.
This also teaches you how to write a Pong game. Have fun with it :).
I'm interested in starting a hobbyist project where I do some image processing by interfacing HW and SW. I am quite a newbie at this. I know how to do some basic image processing in Matlab using the existing image processing commands.
I personally enjoy working with HW and wanted to use a combination of HW and SW to do this. I've read articles about people using FPGAs, or just basic FPGAs/microcontrollers, to go about doing this.
Here is my question: can someone recommend languages I should consider for the interfacing on the PC side? I imagine the SW part would essentially be a GUI and a placeholder for all the processing that is done on the HW. Also, in terms of selecting the HW and realistically considering what I could do on it, could I get a few recommendations on that too?
Any recommendations will be appreciated!
EDIT: I read a few of the other posts saying the requirements depend directly on what kind of image processing one is doing. Initially, I want to do fingerprint recognition, so filtering and locating unique markers in the image, etc.
It all depends on what you are familiar with, how you plan on doing the interface between FPGA and PC, and generally the scale of what you want to do. Examples could be:
- A fast system could, for instance, consist of a Xilinx SP605 board, using the PCI Express interface to quickly transfer image data between PC and FPGA. For this, you'd need to write a device driver (in C) and a user-space application (I've done this in C++/Qt).
- A more realistic hobbyist system could be a Xilinx SP601 board, using Ethernet to transfer data. You'd then just have to write a simple protocol, possibly using raw sockets (no TCP/UDP) to make the FPGA-side Ethernet simpler, which can be done in basically any language offering network access (there's a Xilinx reference design for the SP605 demonstrating this); see the sketch after this list.
- The simplest and cheapest solution would be an FPGA board with a serial connection. You probably wouldn't be able to do any "serious" image processing with this, but it should be enough for very simple proof-of-concept stuff, although the smaller FPGA devices used on these boards typically do not have much on-board memory available.
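To make the raw-socket option a bit more concrete, here is a minimal, hedged sketch of the PC side on Linux (it needs root or CAP_NET_RAW). The interface name "eth0", the FPGA MAC address and the EtherType 0x88B5 (one of the IEEE "local experimental" EtherTypes) are assumptions you'd adjust to your own setup; the FPGA side then only has to recognize frames carrying that EtherType.

    /* Sketch: send one raw Ethernet frame to an FPGA board (Linux, AF_PACKET).
     * Interface name, FPGA MAC and EtherType below are placeholders.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <net/if.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <sys/socket.h>
    #include <sys/ioctl.h>

    int main(void)
    {
        const char *ifname = "eth0";                                       /* assumption */
        const unsigned char fpga_mac[6] = {0x02,0x00,0x00,0x00,0x00,0x01}; /* assumption */
        const unsigned short ethertype = 0x88B5;                           /* local experimental */

        int sock = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (sock < 0) { perror("socket"); return 1; }

        /* Find the interface index and MAC address of the local NIC. */
        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        if (ioctl(sock, SIOCGIFINDEX, &ifr) < 0) { perror("SIOCGIFINDEX"); return 1; }
        int ifindex = ifr.ifr_ifindex;
        if (ioctl(sock, SIOCGIFHWADDR, &ifr) < 0) { perror("SIOCGIFHWADDR"); return 1; }

        /* Build a minimal frame: dst MAC, src MAC, EtherType, then payload. */
        unsigned char frame[64];
        memcpy(frame, fpga_mac, 6);
        memcpy(frame + 6, ifr.ifr_hwaddr.sa_data, 6);
        frame[12] = ethertype >> 8;
        frame[13] = ethertype & 0xff;
        memset(frame + 14, 0xAB, sizeof(frame) - 14);   /* dummy image data */

        struct sockaddr_ll addr;
        memset(&addr, 0, sizeof(addr));
        addr.sll_family   = AF_PACKET;
        addr.sll_protocol = htons(ethertype);
        addr.sll_ifindex  = ifindex;
        addr.sll_halen    = 6;
        memcpy(addr.sll_addr, fpga_mac, 6);

        if (sendto(sock, frame, sizeof(frame), 0,
                   (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("sendto");

        close(sock);
        return 0;
    }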
But again, it all depends on what you actually want to do.
Can anyone give me a reference for a 4-bit ECC algorithm?
I need to implement one for an embedded NAND flash driver.
Your best bet is probably a Reed-Solomon code. Here is a pretty good explanation of how they work, and here is some code that actually implements the algorithm. It isn't commented very well, sorry about that. Some Googling will turn up more.
Good luck.
There are reference implementations readily available for NAND flash. Check out the implementations in the U-Boot and Linux kernel repos.
drivers/mtd/nand/ is the path you want in both repos.
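To show where such an ECC typically plugs into a NAND driver, here is a skeleton in C. The functions ecc4_calculate() and ecc4_correct() are hypothetical placeholders for whatever BCH/Reed-Solomon implementation you end up with (the Linux kernel's software BCH library is one to study), and the 512-byte sector / 7-byte parity sizes are just common example values for 4-bit correction.

    /* Skeleton of how a 4-bit-correcting ECC typically hooks into a NAND
     * page read/write path. ecc4_calculate()/ecc4_correct() and the
     * nand_*_raw() functions are hypothetical placeholders.
     */
    #include <stdint.h>
    #include <stddef.h>

    #define SECTOR_SIZE 512   /* data bytes protected per ECC step (example) */
    #define ECC_BYTES   7     /* parity bytes for t=4 BCH over 512 B (typical) */

    /* Hypothetical ECC primitives - replace with a real implementation. */
    void ecc4_calculate(const uint8_t *data, size_t len, uint8_t *ecc);
    int  ecc4_correct(uint8_t *data, size_t len, const uint8_t *read_ecc,
                      const uint8_t *calc_ecc);   /* #bits fixed, <0 = uncorrectable */

    /* Low-level NAND access, provided by the rest of the driver. */
    int nand_write_raw(uint32_t page, const uint8_t *data, const uint8_t *spare);
    int nand_read_raw(uint32_t page, uint8_t *data, uint8_t *spare);

    int nand_write_page(uint32_t page, const uint8_t *data)
    {
        uint8_t spare[ECC_BYTES];

        /* On write: compute parity over the data, store it in the spare area. */
        ecc4_calculate(data, SECTOR_SIZE, spare);
        return nand_write_raw(page, data, spare);
    }

    int nand_read_page(uint32_t page, uint8_t *data)
    {
        uint8_t stored_ecc[ECC_BYTES];
        uint8_t calc_ecc[ECC_BYTES];
        int ret = nand_read_raw(page, data, stored_ecc);
        if (ret < 0)
            return ret;

        /* On read: recompute parity and correct up to 4 bit errors in place. */
        ecc4_calculate(data, SECTOR_SIZE, calc_ecc);
        ret = ecc4_correct(data, SECTOR_SIZE, stored_ecc, calc_ecc);
        return (ret < 0) ? -1 : 0;   /* -1: uncorrectable, caller must handle */
    }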