How to compile an ipk file into an image in Yocto?

I'm working on RDK-B (based on Yocto). There is a recipe (let's say my_package_1.0.bb) that builds libraries and populates the sysroot with the libraries and headers I need for development. I also see that an .ipk for my package is created under build/tmp/deploy/ipk/.
My requirement is that I want to share the libraries, headers, and a recipe that deploys these into my customer's sysroot directory (for their development), but not the sources for my package. How can I provide the my_package .ipk file I created and compile it into the image? How do I change the current recipe to use the .ipk file instead of the source code?
It would be great if there were an example for reference.
Thank you very much.
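Edit: to make this more concrete, something along these lines is roughly what I imagine shipping to the customer instead of my current recipe. This is only a sketch with hypothetical file and library names, and I don't know whether it is the right mechanism, which is exactly what I'm asking:

# my-package-prebuilt_1.0.bb (hypothetical recipe)
SUMMARY = "Prebuilt my_package libraries and headers"
LICENSE = "CLOSED"

# Hypothetical tarball holding the already-built lib/ and include/ trees
SRC_URI = "file://my-package-prebuilt-1.0.tar.gz"

S = "${WORKDIR}"

# Nothing to configure or compile, the artifacts are already built
do_configure[noexec] = "1"
do_compile[noexec] = "1"

do_install() {
    install -d ${D}${libdir} ${D}${includedir}
    install -m 0755 ${S}/lib/libmypkg.so.1.0 ${D}${libdir}/
    ln -sf libmypkg.so.1.0 ${D}${libdir}/libmypkg.so
    install -m 0644 ${S}/include/*.h ${D}${includedir}/
}

# The shipped binaries arrive already stripped
INSANE_SKIP_${PN} += "already-stripped"

I assume the customer would then pull this into their image with something like IMAGE_INSTALL_append = " my-package-prebuilt", but again, I'm guessing.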

Related

What does it mean when a package is in the go/pkg/mod/cache dir but it has no source code extracted?

I'm trying to understand how the source code for third-party dependencies is or is not compiled into my Go binary. I'm building in a Docker container, so I can see precisely what's fetched for my build without interference from other builds.
After my go build completes I see source code files for several dependencies under go/pkg/mod/$module#$version directories. The Module cache documentation tells me that these directories contain "extracted contents of a module .zip file. This serves as a module root directory for a downloaded module." My best guess is that the presence of extracted source code for these dependencies indicates that "yes, these dependencies are definitely compiled into your binary."
I also see many more dependencies pulled into go/pkg/mod/cache/download/$module directories. The Module cache documentation tells me that this directory contains "files downloaded from module proxies and files derived from version control systems," which I don't fully understand. As far as I can see, these files do not include any extracted source code, though there are several .zip files that I assume contain the source. For the most part these files seem to be .mod files that just contain text representing some sort of dependency graph.
My question is: if a third-party dependency has module files under go/pkg/mod/cache/download but no source code under go/pkg/mod/$module#$version, does that mean that dependency's code was NOT compiled into my Go binary?
I don't understand why the Go build pulls in all these module files but only has extracted source code for some of the third-party modules. Perhaps Go preemptively parses and pulls module information for the full transitive set of modules referenced from the modules my first-party code imports, but perhaps many of those modules don't end up being needed for my binary's compile + build process and therefore don't get extracted. If that's not true and the answer to my question is no, then I don't understand how or why my binary can link in those dependencies without go build fetching their source code.
As mentioned in "Compile and install packages and dependencies":
Compiled packages are cached automatically.
The "GOPATH and Modules" documentation includes:
When using modules, GOPATH is no longer used for resolving imports.
However, it is still used to store downloaded source code (in GOPATH/pkg/mod) and compiled commands (in GOPATH/bin).
So if you see sources in pkg/mod which are not in pkg/mod/cache, try a go mod tidy, which will:
add missing and remove unused modules
From there, you should have the same modules between the extracted sources (pkg/mod) and the download cache (pkg/mod/cache).
Based on the OP's comment
I need to know exactly what's included in the binary for compliance reasons.
I would recommend a completely different approach: dumping the list of symbols contained in the binary and then correlating it with whatever information is needed.
The command
go tool nm -type /path/to/the/executable/image/file
would dump the symbols (the names of the functions) whose code was taken from the standard library packages, third-party and/or vendored packages, and internal packages compiled and linked into the binary, and print to its standard output stream a sequence of lines of the form
address type name
which you can then process programmatically.
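For example (only a sketch: the path is the same placeholder as above, and which symbol-type letters you keep depends on what you need), the text (code) symbols can be pulled out with something like:
go tool nm -type /path/to/the/executable/image/file | awk '$2 == "T" || $2 == "t" { print $3 }' | sort -u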
Another approach you might employ is to use go list with various flags to query the program's source code about the packages and/or modules that will be used when building: whatever that command outputs as the full dependency graph of the source code is what go build will use when building, provided the source code is not changed between these calls.
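As a sketch of that approach (run from the root of your module; the .Standard and .Module template fields are documented in go help list), the non-standard-library modules whose packages are actually in the build graph can be listed with:
go list -deps -f '{{if not .Standard}}{{.Module}}{{end}}' ./... | grep -v '^$' | sort -u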
Yet another possibility is to build the program using go build -x, save the debug trace it produces on its standard error stream, and parse it for the exact module names the command reported as used during the build.
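Again only as a sketch, assuming a Unix-like shell:
go build -x ./... 2> build-trace.log
# then inspect or grep build-trace.log for paths under pkg/mod to see which module versions were actually touched during the build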

Integrating changes into the kernel with Yocto using patches

If you want to modify the Linux kernel so that it excludes certain modules, you usually go to /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig, which has a bunch of Kconfig symbols; the ones I want to exclude are commented out, as shown below.
CONFIG_PPP=y
#CONFIG_PPPOL2TP=y
CONFIG_PPP_ASYNC=y
Then I build the Linux image by running bitbake virtual/kernel, which should ideally have my changes integrated, but when I boot the image, I still see some log messages from the module I commented out.
I checked the Yocto documentation, and it looks like they create a patch of the file they want to modify and then add that patch in a .bbappend file, like:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-calibrate-Add-printk-example.patch"
So in my case, if I were to modify /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig, I would:
create a copy of this original file
paste it into poky/meta-bsp/recipes-kernel/linux-msm/files
rename it
include this file in the .bbappend (as shown above)
But how will the above patch override the original /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig that I planned on modifying by this approach?
You are right, source modifications in Yocto are done with the help of patches. Answering your question
But how will the above patch override the original /kernel/msm-4.9/arch/arm/configs/vendor/<machine-name>_defconfig that I planned on modifying by this approach?
it is done automagically :)
That is, the build system (bitbake) automatically detects files with the .patch extension among the SRC_URI content, copies those files from the meta-layer directory to the build directory somewhere under build/tmp/work/..., and automatically applies those patches to the source code (the sources directory is defined by the S variable in the recipe).
But there are some recommendations about the other parts of your question. Some concepts come first; skip ahead to the right way or even the fast way below if you are bored.
One of the main ideas behind Yocto is reproducible and extensible builds. That means all metadata is split into different repositories (that is, layers) that are maintained by different teams. Basically, teams do not have access to each other's repos, and nobody has access to the source code of some external project (the kernel, in your case). Of course, one could create git forks of all of this, but that is far more complicated than writing metadata. That's why Yocto was created: to take external metadata (e.g. the poky layer) and source code (the kernel) and make it all work only by writing your own metadata (your own layer), with no git forks at all.
So, adding your patch and .bbappend to the poky layer makes no sense, because you wouldn't be able to commit to the poky layer and distribute it to your customers - the poky layer does not belong to you. You could create a git fork, of course, but that complicates the process of distribution.
That's why the right way is:
create your own BSP-layer
create .bbappend in your layer
add the patch to the kernel in your layer (actually the right way is to use .scc files, but a plain patch will also work); a minimal sketch of such a layer follows below
This makes your build:
reproducible - rerunning the kernel compilation, or even a fresh clone of Yocto plus your BSP layer, will produce the same kernel
distributable (extensible) - customers only need your BSP layer; the rest of the Yocto stuff can be found on the internet
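A minimal sketch of such a layer (the layer name is made up; the kernel recipe name linux-msm is taken from the meta-bsp path in the question, and the patch name is just an example):

meta-mybsp/
    conf/layer.conf
    recipes-kernel/linux-msm/linux-msm_%.bbappend
    recipes-kernel/linux-msm/linux-msm/0001-defconfig-disable-PPPOL2TP.patch

with the .bbappend containing nothing more than:

FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-defconfig-disable-PPPOL2TP.patch"

The patch itself is an ordinary git-format patch against the kernel tree that changes the defconfig line you care about.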
And finally, the fast way. If you don't need all this reproducibility and extensibility and just want to build some stuff for a home project and then send Yocto to the trash, hack the build like this:
clean all the kernel stuff you have: bitbake -c cleansstate virtual/kernel
get the kernel sources (clone and patch): bitbake -c patch virtual/kernel
manually change everything you need in the kernel sources, found somewhere under <TOPDIR>/build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/git
build the kernel: bitbake -c package virtual/kernel (cloning and patching will be skipped, as bitbake remembers those tasks are done; the kernel build artifacts won't be removed, because the rm_work task runs after package, and the kernel stuff is actually never removed anyway)
But in this case, if that kernel work directory is removed or breaks somehow, you'll have to walk through this algorithm again (remember - not reproducible).
To easily rebuild the kernel manually (not through bitbake) you can run the script <TOPDIR>/build/tmp/work/<machine-name>-poky-linux-gnueabi/linux-yocto/<version>/temp/run.do_compile

Atmel Studio: adding my own library

I tried to add my USART library to my project, but I am still failing to add it properly so that it is recognized.
I created a USART.c and a USART.h file, which I want to add. This is what I tried:
1) Right Click on the Solution / Properties / Toolchain / Directories
2) Adding the Path where I got these two files
When I try to build the project, it does not work; I get the message undefined reference to 'initUSART'.
How do I add my own libraries to projects then?
The screenshot in your question shows that you arranged for the compiler to find the header files for your library. But you also need to use the compiler to compile your library functions (e.g. initUSART) and create a static library file (with a lib prefix and a .a extension). You would need a separate Atmel Studio project for that, or learn how to use the AVR GCC toolchain outside of the IDE to compile libraries. Then you need to put that file in a directory that is in the linker's search path for libraries, and then you need to pass the appropriate -l argument to the linker. For example, if your library is called libuart.a, you need to pass -luart to the linker. The Project Properties for an Atmel Studio project has the relevant settings you need to configure.
GCC has a standard way to compile, create, and link to static libraries, which I outlined above. You can learn about that from any tutorial on GCC static libraries. You would then need to apply that knowledge to the AVR GCC toolchain and find the appropriate options inside Atmel Studio that you need to set.
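As a rough sketch of that workflow on the command line (the MCU, file names, and paths are only examples, not something Atmel Studio generates for you):

avr-gcc -mmcu=atmega328p -Os -c USART.c -o USART.o        # compile the library source
avr-ar rcs libusart.a USART.o                             # archive it into a static library
avr-gcc -mmcu=atmega328p main.o -L. -lusart -o main.elf   # link; -lusart resolves to libusart.a

Inside Atmel Studio you would instead set the library directory and the library name in the linker settings of the Project Properties mentioned above.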
Aside: Atmel Studio does not make it easy to use libraries at all. The Arduino IDE does a much better job because you just put the source files for the library in the right place and it compiles them for you. There are a huge number of Arduino libraries too; you wouldn't have to write your own UART driver if you could use the Arduino platform.
The simple alternative: If you don't know much about compiling and linking to C libraries and configuring your IDE, you would have a much easier time just copying the library files into your project, adding them as source files, and letting Atmel Studio compile them just like any other source file in your project.
Another simple way of adding folders to your project is to copy/paste the folder into your project directory and then open Atmel Studio.
On the right side (where the Solution Explorer is by default) you'll see all your files except the ones you just added. Now press Show All Files and search for your folder, which should appear grayed out. Right-click on it and choose Include in Project. That should be all!
This image should help
I have another solution that might help. I found the include directories at this path for megaAVR (8-bit):
C:\Program Files (x86)\Atmel\Studio\7.0\packs\atmel\ATmega_DFP\1.6.364\include
Just put all your libraries in one folder and copy it into this path, then include them like any other library. For example, I created a folder named "ali" in that path, then copied all my libraries (like alcd.h, usart.h) into this folder and included them in my programs with this:
#include <ali/usart.h>
And done! Just remember to back up your folder before reinstalling Windows (drive format). You can also find your libraries (.h and .c) under Solution Explorer -> Dependencies after compilation.
Good luck!
Screenshots: inside the folder; including xio.h from the ali folder; xio.h under Dependencies after compilation; my folder in the specified path.

How does the target know which headers it should include?

I do not understand how Xcode knows which headers should be included in which target. For example, if I add a new file to my Xcode project, it adds the .m file to the compile sources of the selected targets, but what about the .h files? How does my target know which header files should be used?
Only .m files and resource files are part of the targets, not .h files. Headers only need to be copied for a framework target, and only because they are part of the framework release (they let users know how to use the framework). Apps don't need the headers because they're compiled, stand-alone entities. The headers (and the pch file) are used during compilation but aren't required at runtime.
You want files to be members of your target when they:
Form part of the executable (e.g. implementation (.m) files or libraries), or
Are included as files in the application bundle (e.g. images).
Just to give an example via a screenshot, the way we control headers in Xcode for libraries is in the Build Phases, something like this:
You can further read this Apple documentation for setting the visibility of header files in Xcode.

Which Qt DLL's should I copy to make my program stand-alone?

I'm trying to make a distribution directory for my application. I've copied several Qt DLLs to that directory, and the program seems to be working, with one exception: it doesn't seem to find the SQL plugin for SQLite. Copying qtsqlite.dll to the directory doesn't allow my application to open or create SQLite files. What must the directory structure be, or which additional files need to be copied, so that the program can read the database?
You can use depends.exe to see exactly what the dependencies of your exe are and make sure they're all included.
Also, read this page about Qt plugins. They are supposed to be in a specific directory called "plugins" and not in the main directory with all the other DLLs.
Most probably, qtsqlite.dll itself depends on the original SQLite DLLs, which you probably need to copy as well.
Don't forget to include a copy of the LGPL license in your distribution, as well as pointers to the original download resources of the libs you include and their sources. To stay on the right side of the law :-)
Thanks to the link @shoosh provided, I was able to fix the problem. I needed to create a sqldrivers subdirectory in the distribution dir with the qsqlite.dll library inside. But that was just step one. Do you have any tips or resources on creating a full-blown Windows installer? I'm mainly a Linux programmer, so this area is unknown to me.
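For reference, the layout that ended up working looks roughly like this (the exact Qt DLL names depend on the Qt version you build against):

MyApp/
    myapp.exe
    QtCore4.dll
    QtGui4.dll
    QtSql4.dll
    sqldrivers/
        qsqlite.dll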
