We have one build config for a couple of similar images. They differ in development and deployment recipes. We also want to have two different kernels with different configs.
The Linux kernel is specified in conf/machine/my-machine.conf:
PREFERRED_PROVIDER_virtual/kernel ?= "my-kernel"
which points to recipe-kernel/linux/my-kernel.bb.
How can we include a second, development kernel, my-kernel-dev.bb, in the same build config?
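One common Yocto pattern, sketched here under assumptions (the dev machine name, the file location written by the heredoc, and the image name are all illustrative), is to add a second machine configuration that pulls in the first one and only overrides the kernel provider:

# Hypothetical dev variant of the machine conf, placed next to my-machine.conf;
# it inherits everything and only swaps the kernel recipe.
cat > conf/machine/my-machine-dev.conf <<'EOF'
require conf/machine/my-machine.conf
PREFERRED_PROVIDER_virtual/kernel = "my-kernel-dev"
EOF

# Build against the dev machine (image name is illustrative)
MACHINE=my-machine-dev bitbake core-image-minimal

Because the base conf uses a weak default (?=), the plain assignment in the dev conf takes precedence.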
Related
I wrote a C++ program with multiple classes, divided into multiple files, which is intended to run on an embedded device (a Raspberry Pi 2, to be specific) that has no internet access. Building the source and installing the dependencies on every device would therefore be very laborious.
Is there a way to compile the program on one of the devices (unlike the others, this one has internet access), so that I can just transfer the build files, e.g. via USB, to the other devices? This should also include the various libraries I used, so that I don't have to install them on every device. These are mainly std, but also a library I cloned and built myself and a library installed with apt (I linked the libraries I used as an example, but they shouldn't affect the process, I guess).
I'm using CMake. Is there an option to make CMake compile a program into a (set of) files that run independently of the system-installed libraries? In other words: they run without the required libraries being installed on the system, because they are shipped with the build files.
Edit:
My main problem is that I cannot get a certain dependency onto the target devices because they have no internet access. Can I build the package and also include the library in that build, without having to install it separately on each device?
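Not a CMake-specific answer, but a rough sketch of the idea the question is after: find out which shared libraries the binary needs, ship the non-system ones next to it, and point the dynamic loader at that directory. The binary and library names below are hypothetical.

# On the build machine: list the shared libraries the program actually loads
ldd ./myprogram

# Copy the non-system ones (e.g. the self-built library) next to the binary
mkdir -p deploy/libs
cp ./myprogram deploy/
cp /usr/local/lib/libmylib.so.1 deploy/libs/   # hypothetical library path

# On the target: run with the bundled libraries taking precedence
cd deploy && LD_LIBRARY_PATH=./libs ./myprogram

Fully static linking (for example, passing -static through the project's linker flags) avoids the bundling step entirely, at the cost of larger binaries.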
I'm not sure I fully understand why you need internet access for your deployment, but I can give you several methods and you can choose the one that seems best to you.
Method A: Cloning the SD card image
During your development phase you ended up with a working RPi device and you want to replicate it. You can duplicate your image onto other SD cards, N times (see the dd sketch below), and eventually this could be sufficient to make it work.
Pros: Very quick method
Cons: Usually your development phase involves adjustments, trying different tools, different versions, etc., so your original RPi image is not clean and you will replicate that. Definitely not valid for an industrial project, but it could be sufficient for a personal one.
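For example, with the SD card in a reader on a Linux PC (the device name /dev/sdX is illustrative; double-check it before writing, as dd overwrites the target):

# Read the whole card into an image file
sudo dd if=/dev/sdX of=rpi-master.img bs=4M status=progress

# Write that image to each new card (destroys its current contents)
sudo dd if=rpi-master.img of=/dev/sdX bs=4M status=progress conv=fsync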
Method B: Create deployments scripts
You can create a deployment script on your computer to copy, configure and install what you need. Assuming you start with a certain version of Raspberry Pi OS, you flash it, then you boot your Pi, which is connected via Ethernet for example. You then start a script on your computer that will:
Copy needed sources / packages / binaries
(Optional) compile sources (if you have a compiler that suits your needs on RPi OS)
Miscellaneous configuration
To do all these, a script like this can do the job:
#!/bin/bash
# requires the sshpass package on the development machine
PI_USERNAME="pi"
PI_PASSWORD="raspberry"
PI_IPADDRESS="192.168.0.3"
# example of how to execute a command remotely
sshpass -p ${PI_PASSWORD} ssh ${PI_USERNAME}@${PI_IPADDRESS} sudo apt update
# example of how to copy a local file onto the RPi
sshpass -p ${PI_PASSWORD} scp local_dir/nlohmann/json.hpp ${PI_USERNAME}@${PI_IPADDRESS}:/home/pi/sources
Important notes:
Hard-coded credentials are not recommended.
This script assumes you are using Linux, but you'll find equivalent tools under Windows.
This assumes your RPi has a fixed IP, but you can still improve the script to find the RPi on the network automatically (see the sketch after the pros/cons below); there are lots of possibilities.
Pros: While you create this deployment script, you force yourself to start from a clean image, so no dirty environment is duplicated.
Cons: Takes a bit longer than Method A
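One possible way to locate the Pi automatically, as mentioned in the notes above (the subnet is an assumption, and nmap must be installed and run as root to see MAC vendors):

# Ping-scan the local subnet and keep the hosts whose MAC vendor is Raspberry Pi
sudo nmap -sn 192.168.0.0/24 | grep -B 2 "Raspberry Pi"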
Method C: Create your own Raspberry PI image using Yocto
Yocto is a tool for creating your own images, and it supports the Raspberry Pi. You can customize absolutely everything and produce an SD card image that you can simply flash onto your RPis' SD cards (a rough build sketch follows the pros/cons).
Pros: Very complete tool, industrial process
Cons: Quite complicated to deal with, not suitable for beginners, time cost
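Roughly, a minimal meta-raspberrypi build looks like this; the branch names, the machine and the image are illustrative and depend on the Yocto release you pick:

# Fetch the build system and the Raspberry Pi BSP layer (matching branches)
git clone -b kirkstone git://git.yoctoproject.org/poky
git clone -b kirkstone git://git.yoctoproject.org/meta-raspberrypi
cd poky
source oe-init-build-env                    # creates and enters the build/ directory
bitbake-layers add-layer ../../meta-raspberrypi

# Build an image for the Pi 2; results land under tmp/deploy/images/raspberrypi2/
MACHINE=raspberrypi2 bitbake core-image-minimal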
Since you said in the comments that it's only for 10 devices and that you are a bit scared of cross-compiling, I would not push the Yocto method on you. I would not recommend Method A either, mostly because of the dirty-environment duplication (but that's up to you in the end). Method B, with the deployment script, is probably the best way to go.
I have been compiling a C/C++ codebase that takes 1.5 hours to build with make on a 4-core machine. I also have 10 more machines on the network that I could use for compiling. I know about make's -j option, which spreads the compilation over a specified number of jobs, but -j only uses the current machine, not the other 10 machines connected to the network.
We could use MPI or another parallel programming technique, but then we would have to rewrite the make-based build to fit that programming model.
Is there any other way to make use of the other available machines for compilation?
Thanks.
Yes, there is: distcc.
distcc is a program to distribute compilation of C or C++ code across
several machines on a network. distcc should always generate the same
results as a local compile, is simple to install and use, and is often
two or more times faster than a local compile.
Unlike other distributed build systems, distcc does not require all
machines to share a filesystem, have synchronized clocks, or to have
the same libraries or header files installed. Machines can be running
different operating systems, as long as they have compatible binary
formats or cross-compilers.
By default, distcc sends the complete preprocessed source code across
the network for each job, so all it requires of the volunteer machines
is that they be running the distccd daemon, and that they have an
appropriate compiler installed.
The key is that you still keep your single make, but the gcc wrapper arranges things appropriately: the preprocessor (and thus the headers) runs locally, while the compilation to object code is farmed out over the network.
I have used it in the past, and it is pretty easy to set up, and it helps in exactly your situation.
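A minimal setup looks roughly like this (package names and addresses are Debian/Ubuntu-flavoured examples):

# On each helper machine: install distcc and accept jobs from the local network
sudo apt install distcc
distccd --daemon --allow 192.168.0.0/24

# On the machine that runs the build: list the helpers and point make at the wrapper
export DISTCC_HOSTS="localhost 192.168.0.11 192.168.0.12"
make -j20 CC="distcc gcc" CXX="distcc g++"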
https://github.com/icecc/icecream
Icecream was created by SUSE and is based on distcc. Like distcc, Icecream takes the compile jobs from a build and distributes them among remote machines, allowing a parallel build. But unlike distcc, Icecream uses a central server that dynamically schedules the compile jobs to the fastest free server. This advantage pays off mostly for shared computers; if you're the only user on x machines, you have full control over them anyway.
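Usage is similar in spirit; package, service and path names below are Debian-style assumptions and vary by distribution:

sudo apt install icecc                 # on every machine taking part
sudo systemctl start icecc-scheduler   # on exactly one machine: the central scheduler
sudo systemctl start iceccd            # on every machine: the compile daemon

# Put the icecc compiler wrappers first in PATH, then build as usual
export PATH=/usr/lib/icecc/bin:$PATH
make -j20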
I want to be able to debug my driver more efficiently by being able to step through the source code without all the hopping around. Is there a way to specify that just a certain driver gets -O0, while the rest gets the kernel's normal optimization level (typically -O2)?
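A common kbuild approach is to set per-Makefile or per-object compiler flags. A sketch for a hypothetical out-of-tree module follows; note that some kernel code does not build cleanly at -O0, so -Og is often used instead:

# Append kbuild flag overrides to the module's Makefile
cat >> Makefile <<'EOF'
ccflags-y += -O0 -g           # applies to every object built by this Makefile
# CFLAGS_mydriver.o += -O0    # alternative: limit the override to a single object
EOF

# Rebuild only this module against the running kernel's build tree
make -C /lib/modules/$(uname -r)/build M=$PWD modules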
I am using Brian Gladman's library for EAX encryption in one of my projects.
The problem is that the code works in my local development environment (Ubuntu running under VirtualBox) but the same code does not work (the encryption output is incorrect) on a system running in the Amazon AWS cloud.
I have checked the GCC version in both my local environment and on the cloud. The versions are the same:
gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5)
In what cases can this happen? Any ideas?
There are any number of things that could cause this. It's not just the compiler, it could be:
the version of the C libraries in use.
undefined behaviour (or even bugs) on the part of the encryption library.
environment variable settings such as PATH or LIBPATH, which can affect the compilation/linking process.
I don't present that as an exhaustive list. The number of possibilities is actually quite large.
You would probably have to debug it on the target environment to see exactly why it's not functioning as expected.
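A quick way to compare the two environments and narrow things down (the binary and test-vector file names are hypothetical):

# Run on both machines and diff the results
gcc -v 2>&1 | tail -n 1        # exact compiler build string
md5sum ./eax_test              # confirm both hosts run the same binary
ldd ./eax_test                 # shared libraries actually resolved at run time
./eax_test < test_vectors.txt  # known-answer vectors show where the output diverges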
I remember that a few years ago (2002) there was a multipartite virus that could run natively on Linux and Windows. I don't know whether a compiler could specially craft an executable so that it could be read as both ELF and PE, with each OS starting execution at a different entry point, or whether a program could merge two programs, one compiled with MinGW and one compiled natively on Linux, into a single executable.
I don't know if such a program exists, or whether it could exist. I know this could be implemented in Java or some scripting language, but that would not be a native program.
Imagine the possibilities: I could deploy a program with Linux and Windows (and perhaps OS X) libraries, and one main executable that could run on any OS. The cross-platform support would compensate for the bigger size.
Windows programs have a DOS stub at the beginning, and I just ran an ELF executable through debug.com, which said that the first instruction of this exe was JG 0x147. Just maybe something could be done with this...
No.
Windows and Linux use vastly different binary file formats. See Portable Executable (Windows) and Executable and Linkable Format (Linux).
Something like WINE will run Windows executables on Linux but that's not the same thing.
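You can see the difference right in the first bytes of each file (program.exe here is a placeholder for any Windows binary):

xxd -l 4 /bin/ls             # 7f 45 4c 46  ->  "\x7fELF"
xxd -l 2 program.exe         # 4d 5a        ->  "MZ", the DOS/PE stub header
file /bin/ls program.exe     # prints the detected executable format for each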
This is actually a really terrible idea for multiple reasons.
Cross-compiling across operating system boundaries is extremely difficult to do properly.
If you go for the second route (building separate PE binaries on Windows and ELF on Linux, and then somehow merging them) you have to maintain two machines, each running a different OS and the full build stack, and you'd have to make sure that you tested both versions separately before gluing them together.
Dynamic linking is already a pain to properly manage, on Windows and on Linux; static linking can generate binaries that are much more inconvenient to deal with than whatever imaginary benefits you get from providing one single file type to your end-user.
If you want to run the same binary executable file on multiple OSes, your options are Java, Mono, and potentially NativeClient, the browser plug-in Google's developing to work around the "webapps are too slow" problem.