U-boot Standalone Application at Startup - embedded-linux

Is it possible to include standalone applications/startup scripts in the U-Boot bootup process, and what are the available hooks?
So far I can see from the hello_world example how to compile a standalone application in C, but it still needs to be loaded manually through TFTP, which I don't want to do.
EDIT: I have found several "hooks" listed in common.h such as
last_stage_init()
board_late_init()
Where can I find a description of the proper workflow for adding an application that makes adjustments to the environment variables?

The basic answer here is that you can run whatever you want from the CONFIG_BOOTCOMMAND variable, and that in turn can load and 'go' your application from wherever you have stored it on the device.
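For illustration, a minimal sketch of what that could look like in a board configuration header; the storage device, load address, file name, and the mmcboot variable are placeholders, not anything the answer prescribes:

/* include/configs/<board>.h -- hypothetical board header.
 * Load a standalone binary from the first FAT partition of MMC 0, jump to
 * it with 'go', then continue with the board's normal boot. The standalone
 * application must be linked for the load address used here. */
#define CONFIG_BOOTCOMMAND \
    "fatload mmc 0:1 0x82000000 hello_world.bin; " \
    "go 0x82000000; " \
    "run mmcboot"   /* 'mmcboot' stands in for the board's usual boot script */

The same commands can also be stored in the bootcmd environment variable with setenv/saveenv instead of being compiled in.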

I wanted to use a U-Boot startup script but did not know how to proceed, and wrongly used the term application.
I now use the hooks specified in board_r.c, for example misc_init_r() and last_stage_init(), where I run the startup steps required before boot.
Just remember to enable these functions with #define CONFIG_LAST_STAGE_INIT and #define CONFIG_MISC_INIT_R
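A hedged sketch of such a hook in a board file (the board path, variable name, and value are made up; on recent U-Boot versions the call is env_set() rather than setenv()):

/* board/<vendor>/<board>/<board>.c -- hypothetical board file */
#include <common.h>

/* Called from board_r.c late in the init sequence when CONFIG_MISC_INIT_R
 * is defined; adjust environment variables here before the boot command runs. */
int misc_init_r(void)
{
    setenv("my_boot_flag", "1");    /* use env_set() on recent U-Boot */
    return 0;                       /* non-zero is treated as an init failure */
}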

Related

Where is the kernel config in AOSP Android 10?

I've found various kernel configs in kernel/configs/q.
When I alter them and run mm in kernel/msm-4.14, the kernel is not rebuilt.
Where do I edit the kernel config, such that a kernel rebuild is forced when mm is run?
The kernel is built separately from the Android platform first. Then the Android platform build system is pointed at where the kernel image is located, using the TARGET_PREBUILT_KERNEL environment variable.
Here is an outline of how I usually configure and build. I have done it this way for both Android 9 and 10, for various vendors. The scheme I use is mentioned in the docs here. Non-Google kernels usually don't come with version control (repo); I don't know what you're dealing with, so I'll cover both.
Configuring the kernel
For repo-checkout kernels, you do the config in build/build.config. Basically, after the defconfig has been taken as a basis, you use the ${KERNEL_DIR}/scripts/config tool to alter the config. This usually looks as follows:
POST_DEFCONFIG_CMDS="check_defconfig && update_config"
function update_config() {
    # scripts/config takes the option name and value as separate arguments
    ${KERNEL_DIR}/scripts/config --file ${OUT_DIR}/.config \
        -d CONFIG_SOMETHING_I_DISABLE \
        -e CONFIG_SOMETHING_I_ENABLE \
        --set-val CONFIG_FOO 123
}
If you don't have a repo-checkout kernel, locations and details may differ, but the basic idea is usually the same: find/create the script that kicks off the build, and add invocations of the config tool after defconfig is made.
Run the config tool on its own to see the full options and more info on its usage, but the above is usually all you need. Beware: if you make syntactically correct but invalid changes (e.g. enable symbols whose dependencies are not met), the build system will NOT complain and will silently ignore these changes. If you face this situation, use e.g. menuconfig to find out what's wrong, as it shows dependencies.
Building AOSP / Making boot.img
After you've built your kernel, you will have Image.lz4 in out/.../dist (or Image.gz in out/.../private/msm-google/arch/arm64/boot). You go to your Android source, and in addition to the usual things (source build/envsetup.sh, lunch) you point the build system at the image you built, e.g. export TARGET_PREBUILT_KERNEL=/path/to/Image.lz4. Then just start the build normally, e.g. make bootimage or m droid.
Note that for Android 10, at least in some cases, you'll have to copy over the kernel modules from out/.../dist too, since the new kernel can't load the old ones. I am having problems with this part myself at the moment. I think they have to be copied to device/VENDOR/DEVICE (e.g. google/coral-kernel); you may also copy your kernel image there, by the way, since the original prebuilt one is also there by default. The problem is that, at least in my case, the new kernel modules were not copied to the device after all.

How to add netconsole to kernel with OpenWRT

I'm using OpenWRT and I'm trying to work with netconsole instead of a serial cable to debug kernel messages. By default, netconsole is not defined in OpenWRT, and I can't add it through menuconfig. There is no documentation for it anywhere. Any help adding netconsole to the kernel would be much appreciated! Thanks
Run: make kernel_menuconfig and select the option.
But that alone won't help you. Netconsole requires kernel boot parameters to be set, most importantly the destination address to which the console messages are sent. You need to modify your bootloader to pass that parameter to the kernel.
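For reference, a hedged sketch of what that parameter could look like if the board's bootloader is U-Boot and the kernel command line comes from the board config header; the header path, IP addresses, port, and interface name are all placeholders:

/* include/configs/<board>.h -- hypothetical U-Boot board header.
 * netconsole parameter format (see Documentation/networking/netconsole.txt):
 *   netconsole=[src-port]@[src-ip]/[dev],[tgt-port]@<tgt-ip>/[tgt-macaddr]
 * Here the board at 192.168.1.1 sends log messages out of eth0 to UDP
 * port 6666 on 192.168.1.2. */
#define CONFIG_BOOTARGS \
    "console=ttyS0,115200 " \
    "netconsole=6666@192.168.1.1/eth0,6666@192.168.1.2/"

With CONFIG_NETCONSOLE_DYNAMIC (mentioned below) you can instead configure the target at runtime through configfs, which avoids touching the bootloader at all.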
I found one way to do this.
First you need to look at your .config file, which is found in the Linux source folder for the version you are working with.
For example, I'm working with qca/src/linux-3.14.
This .config is generated during compilation.
In it you can see the line
# CONFIG_NETCONSOLE is not set
The configuration entries in this file define what will be built and what won't.
So in order to build this module, go to your target folder; in linux/generic/ there are further config files. Mine is config-3.14; yours will match the Linux version you are using.
Change # CONFIG_NETCONSOLE is not set to CONFIG_NETCONSOLE=m
and add CONFIG_NETCONSOLE_DYNAMIC=y.
Now during compilation, the first .config file will end up with the correct configuration and the netconsole.ko module will be created.
This applies to adding any module controlled by the .config file to the kernel.
Of course, you then need to load this module manually, or build it into the kernel with CONFIG_NETCONSOLE=y, but I've had some problems with that.

Compiling example UBOOT standalone application

I am trying to build a UBOOT standalone application.
Looking at the README and the minimal documentation on this, I am curious.
Is the hello_world standalone example simply compiled/cross-compiled the same way any other application is, obviously for whatever the target architecture is?
Do I need to use makeelf or something to get a .bin file?

Is there any way to simulate LD_LIBRARY_PATH in Windows?

I have a program that does some graphics. When I run it interactively, I want it to use OpenGL from the system to provide hardware-accelerated graphics. When I run it in batch, I want to be able to redirect it to use the Mesa GL library so that I can use OSMesa functionality to render to an offscreen buffer. The OSMesa functionality is enabled by doing a LoadLibrary/GetProcAddress if the batch start-up option is selected.
On Linux, it's fairly easy to make this work. By using a wrapper script to invoke the program, I can do something like this:
if [ "$OPTION" = "batch" ]; then
export LD_LIBRARY_PATH=$PATHTO/mesalibs:$LD_LIBRARY_PATH
fi
Is it possible to do something like this in Windows?
When I try adding a directory to the PATH variable, the program continues to go to the system opengl32.dll. The only way I can get the program to use the Mesa GL/OSMesa shared libraries is to have them reside in the same directory as my program. However, when I do that, the program will never use the system opengl32.dll.
If I've understood what you're saying correctly, the wrong version of opengl32.dll is being loaded when your process starts up, i.e., load-time dynamic linking. There is probably no good way to solve your problem without changing this.
You say you can't use conveniently use run-time dynamic linking (LoadLibrary/GetProcAddress) for opengl32.dll because the calls to it are coming from the Qt library. I presume that the Qt library is itself dynamically linked, however, so you should be able to solve your problem by using run-time linking for it. In this scenario, provided you load opengl32.dll before you load the Qt library, you should be able to explicitly choose which version of opengl32.dll you want to load.
You might want to consider using delayed loading in order to simplify the process of moving from load-time to run-time linking. In this scenario, the first call into the Qt library causes it to be loaded automatically, and you'll just need to explicitly load opengl32.dll first.
There are a few ways you could handle this, depending on the libraries and their names/locations:
If both have the same name (opengl32.dll), then you need to add the Mesa DLL location to the search path such that it is searched before the system directory. The order directories are checked in is detailed here. As you can see, $PATH comes last, after system, so you can't just add the directory to that. However, you can make use of the second step ("The current directory") by setting the working directory to a path containing the mesa files. Generally this means starting the application using an absolute path while in the directory containing the files.
That's still not particularly pleasant, though. If you can, you should use LoadLibrary and check for an environment variable (OPENGL_LIBRARY_PATH) when your app starts up. Assuming the exports from opengl32.dll and Mesa's DLL are the same, you can do something like:
void LoadExports()
{
    /* getenv() returns the variable's value (or NULL); it does not fill a buffer */
    const char *location = getenv("OPENGL_LIBRARY_PATH");
    HMODULE oglLib = LoadLibraryA(location ? location : "opengl32.dll");
    /* cast each result to the matching function pointer type */
    function1 = GetProcAddress(oglLib, "glVertex2f");
    ...
}
This will work perfectly fine, doing almost exactly what you want.
However, if you want to do that, you can't import opengl32.dll, which you're probably doing; you have to link dynamically throughout. Make sure not to link against opengl32.lib and you should be fine. Depending on how many functions you use, it may be a pain to set up, but the code can easily be scripted and only needs to be done once; you can also use static variables to cache the results for the lifetime of the program. It's also possible to use different function names for different libraries, although that takes a bit more logic, so I'll leave the details to you.
Though this should be possible in the cmd window, it seems you're having no luck.
Try this: set a variable in your script (RUNNING_IN_SCRIPT=Y), then check for that variable in your executable and LoadLibrary from the absolute installation path; be sure to clear the variable when you exit.
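A minimal sketch of that idea (the variable name comes from the answer above; the installation path and function name are placeholders):

#include <stdlib.h>
#include <windows.h>

/* Pick the Mesa copy of the DLL only when launched from the batch
 * wrapper script; otherwise fall back to the system OpenGL. */
HMODULE LoadGlLibrary(void)
{
    const char *in_script = getenv("RUNNING_IN_SCRIPT");
    if (in_script && in_script[0] == 'Y')
        return LoadLibraryA("C:\\install\\path\\mesalibs\\opengl32.dll"); /* placeholder path */
    return LoadLibraryA("opengl32.dll"); /* system DLL */
}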
Windows used to search various paths for dynamic libraries, but due to security considerations the system path is now searched first.
You could, however, use delay-load imports as a workaround:
If you're using MSVC, you can single out the DLLs you want to load yourself with the /DELAYLOAD flag to the linker.
Then, override the delay-load helper function and use LoadLibrary to find the proper DLL (rather than leaving it to the system).
After loading the correct DLL, have your helper function just call the original one, which will do all the GetProcAddress business by itself.
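As a rough sketch of that approach (link with /DELAYLOAD:opengl32.dll and delayimp.lib; the environment variable name is just an example, and older MSVC versions declare the hook pointer without const):

#include <windows.h>
#include <delayimp.h>
#include <stdlib.h>
#include <string.h>

/* Delay-load notification hook: when the helper is about to load
 * opengl32.dll, substitute whatever DLL the environment variable points
 * to. Returning NULL keeps the default behaviour. */
static FARPROC WINAPI DelayHook(unsigned dliNotify, PDelayLoadInfo pdli)
{
    if (dliNotify == dliNotePreLoadLibrary &&
        _stricmp(pdli->szDll, "opengl32.dll") == 0) {
        const char *path = getenv("OPENGL_LIBRARY_PATH");
        if (path)
            return (FARPROC)LoadLibraryA(path);
    }
    return NULL;
}

const PfnDliHook __pfnDliNotifyHook2 = DelayHook; /* drop const on older MSVC */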

How to debug into my apache module built using a Makefile?

Firstly, I come from a Windows/Visual Studio/C++ background. Now I am developing in an Ubuntu environment.
With the help of a Makefile, I built a mymodule.so and copied it to the modules folder within Apache. The module appears to be working fine, but I would like to debug into it to understand it better.
So, first, is there any way I can get something similar to the Visual Studio debugger feel while debugging this module?
I read that I can use gdb to debug Apache modules; can somebody tell me in detail how this is done, or point me to some resource that explains it?
Ideally, I would like to single-step and so on. I am trying the Code::Blocks IDE, which has some debugging support. Using the IDE and a custom Makefile, I build the module and copy it to the module location, but how do I debug it?
How do I hook into the Apache process? Should I use Attach to Process? I tried that with the PID of httpd, but with no success.
Also, while building, is there some flag that I should set so that the .so file is debuggable?
I am pretty new to Linux because I come from a Windows programming background. Kindly suggest how I should go about this.
Thanks in advance,
Arjun
I think you can attach to an Apache process using gdb (attach 1111, where 1111 is the PID of the process, or via Code::Blocks) and set breakpoints in your module's functions, provided the module was compiled with debug info. You will need to be root or the same user as the Apache process.
The gcc -g flag is used to build binaries with debug info.
