C/C++ Build standalone executable (including libraries) - gcc

I wrote a C++ program with multiple classes, divided into multiple files, which is intended to run on an embedded device (a Raspberry Pi 2, to be specific) that has no internet access. Building the source and installing the dependencies on every device would therefore be very laborious.
Is there a way to compile the program on one of the devices (unlike the others, this one has internet access), so that I can just transfer the build files, e.g. via USB, to the other devices? This should also include the various libraries I used, so that I don't have to install them on every device. These are mainly std, but also a library I cloned and built myself and a library installed with apt (I linked the libraries used as an example, but they shouldn't affect the process, I guess).
I'm using CMake. Is there an option to make CMake compile a program into a (set of) files that run independently of the system-installed libraries? In other words: they run without the required libraries being installed on the system, because they are shipped with the build files.
Edit:
My main problem is that I cannot get a certain dependency onto the target devices due to their lack of internet access. Can I build the package and include the library in that build, without having to install it on each device?
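For reference, this is the direction I was imagining (a minimal sketch with placeholder file names; whether the fully static variant works depends on the libraries involved):
# link the C++ runtime statically so the binary runs without a shared libstdc++ on the target
g++ -o myprog main.cpp other.cpp -static-libgcc -static-libstdc++
# or, more aggressively, a fully static binary (can break libraries that rely on dlopen or NSS)
g++ -o myprog main.cpp other.cpp -static
# the rough CMake equivalent: target_link_options(myprog PRIVATE -static-libstdc++)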

I'm not sure I fully understand why you need internet for your deployment, but I can give you several methods and you can choose the one that seems best to you.
Method A: Cloning the SD card image
During your development phase, you ended up with a working RPi device and you want to replicate it. You can use tools to duplicate your image onto other SD cards, N times, and eventually this could be sufficient to make it work (see the sketch after the pros/cons below).
Pros: Very quick method
Cons: Usually, your development phase involves adjustments, trying different tools, different versions, etc., so your original RPi image is not clean, and you'll replicate that dirt. Definitely not suitable for an industrial project, but it could be sufficient for a personal one.
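A minimal sketch of the duplication itself, assuming a Linux host where the source card shows up as /dev/mmcblk0 and the target as /dev/sdX (both device names are assumptions; verify with lsblk before writing):
# read the source SD card into an image file
sudo dd if=/dev/mmcblk0 of=rpi.img bs=4M status=progress
# write the image onto each target card
sudo dd if=rpi.img of=/dev/sdX bs=4M status=progress conv=fsync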
Method B: Create deployment scripts
You can create a deployment script on your computer to copy, configure, and install what you need. Assuming you start with a certain version of Raspberry Pi OS: you flash it, then you boot your Pi, which is connected via Ethernet, for example. You can then start a script on your computer that will:
Copy needed sources / packages / binaries
(Optional) compile sources (if you have a compiler that suits your needs on RPi OS)
Miscellaneous configuration
To do all this, a script like the following can do the job:
PI_USERNAME="pi"
PI_PASSWORD="raspberry"
PI_IPADDRESS="192.168.0.3"
# example on how to execute a command remotely
sshpass -p ${PI_PASSWORD} ssh ${PI_USERNAME}#${PI_IPADDRESS} sudo apt update
# example on how to copy a local file on the RPI
sshpass -p ${PI_PASSWORD} scp local_dir/nlohmann/json.hpp ${PI_USERNAME}#${PI_IPADDRESS}:/home/pi/sources
Important notes:
Hard-coded credentials are not recommended.
This script assumes you are using Linux, but you'll find equivalent tools on Windows.
This assumes your RPi has a fixed IP, but you can improve the script to automatically find the RPi on the network (lots of possibilities).
Pros: While you create this deployment script, you'll force yourself to start from a clean image and no dirty environment is duplicated.
Cons: Takes a bit longer than method A
Method C: Create your own Raspberry PI image using Yocto
Yocto is a tool to create your own images, suitable for the Raspberry Pi. You can customize absolutely everything and produce an SD card image you can just flash onto your RPis' SD cards (a minimal sketch follows the pros/cons below).
Pros: Very complete tool, industrial process
Cons: Quite complicated to deal with, not suitable for beginners, time cost
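To give a flavour of the workflow, a minimal sketch assuming a Poky checkout with the meta-raspberrypi layer already added to bblayers.conf:
# enter the build environment
source oe-init-build-env
# target the Raspberry Pi 2 machine
echo 'MACHINE = "raspberrypi2"' >> conf/local.conf
# build a minimal image; the resulting SD card image can then be flashed onto each device
bitbake core-image-minimal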
Since you said in the comments that it's only for 10 devices and that you were a bit wary of cross-compiling, I would not recommend the Yocto method for you. I would not recommend method A either, mostly because of the dirty-environment duplication (but it's up to you in the end). Method B with the deployment script is probably the best way to go.

Related

What is the correct way to use Intel Advisor on a remote machine?

Intel VTune Amplifier offers the possibility of profiling a parallel application executed on a remote machine.
Intel Advisor doesn't have such an option. According to this document, you have to use the command-line version of Intel Advisor:
This makes it possible to automate many tasks as well as analyze an application running on remote hosts
However, the GUI version has many features not offered by the command-line version (like suggestions on how to solve vectorization/multi-threading inefficiencies, etc.).
I tried to run advixe-cl on the remote machine and then copy the project (and produced results) locally. It works, but some features are lost. As a last resort I tried to ssh -X into the remote machine and then use advixe-gui, but it seems that the main core of my Xeon Phi KNL is too weak to run such a graphical application properly.
What is the correct/best use of Intel Advisor in such a scenario?
The recommended way is the one you describe: "run advixe-cl on the remote machine and then copy the project locally".
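A minimal sketch of that workflow (the project directory, host, and binary names are placeholders):
# on the remote machine: run a survey analysis into a project directory
advixe-cl -collect survey -project-dir ./advi_proj -- ./my_app
# on the workstation: copy the project locally and open it in the GUI
scp -r user@remote:advi_proj .
advixe-gui ./advi_proj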
But you mentioned that "some features were lost". What did you lose exactly?
The key deficiency of the given command-line + GUI approach is that you may not initially see your source code in the "Source View" tabs. To overcome this limitation, you have to adjust the Project Properties of your local project copy: specify "Source Search" and sometimes "Binaries/Symbol Search" directories, providing the paths to where the original source code and sometimes the executable binary plus DWARF/PDB debug info files are located.
In case you used the "-no-auto-finalize" option on the command line (a more advanced scenario), you may also need to use the Re-Finalize feature (available only starting from the 2017 Update 2 release) or (for older versions) make sure that you provide the Binary/Symbol/Source Search paths after opening the local project copy, but before the "Show My Result" data upload action.

How to parallelize the "make" command to distribute tasks across multiple machines

I have been compiling a C/C++ codebase that takes 1.5 hours to compile on a 4-core machine using the "make" command. I also have 10 more machines I can use for compiling, and I know about the "-j" option in "make", which distributes compilation across a specified number of threads. But "-j" only distributes jobs on the current machine, not on the other 10 machines connected to the network.
We could use MPI or another parallel programming technique, but then we would need to rewrite the "make" implementation itself for it.
Is there any other way to make use of the other available machines for compilation?
Thanks
Yes, there is: distcc.
distcc is a program to distribute compilation of C or C++ code across several machines on a network. distcc should always generate the same results as a local compile, is simple to install and use, and is often two or more times faster than a local compile.
Unlike other distributed build systems, distcc does not require all machines to share a filesystem, have synchronized clocks, or to have the same libraries or header files installed. Machines can be running different operating systems, as long as they have compatible binary formats or cross-compilers.
By default, distcc sends the complete preprocessed source code across the network for each job, so all it requires of the volunteer machines is that they be running the distccd daemon, and that they have an appropriate compiler installed.
The key is that you still keep your single make, but gcc then arranges things appropriately: the preprocessor (and thus the headers) runs locally, while compilation to object code is farmed out over the network.
I have used it in the past, and it is pretty easy to set up; it helps in exactly your situation.
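For illustration, a minimal distcc setup might look like this (host names and the subnet are placeholders):
# on each volunteer machine: start the distcc daemon and allow the build subnet
distccd --daemon --allow 192.168.0.0/24
# on the build machine: list the helpers, then run make through distcc
export DISTCC_HOSTS='localhost worker1 worker2'
make -j20 CC="distcc gcc" CXX="distcc g++"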
https://github.com/icecc/icecream
Icecream was created by SUSE and is based on distcc. Like distcc, Icecream takes compile jobs from a build and distributes them among remote machines, allowing a parallel build. But unlike distcc, Icecream uses a central server that dynamically schedules the compile jobs to the fastest free server. This advantage pays off mostly for shared computers; if you're the only user on x machines, you have full control over them anyway.

Embedded system USB update via opkg

I need to provide a single update file to a customer in order to update an embedded system via USB. The system is built using Yocto. I'm curious if the plan I have to implement USB updating is viable, or if I'm missing something that should be obvious.
opkg exists on the system, but in order to use opkg update, it needs a repo to pull from. Since I have no network capabilities, I will need to put the entire repo on a USB drive. And since I need to provide a single file to the customer, the repo will then need to be a tar file.
Procedure
Plug USB drive in
The udev rule calls a script and pushes it to the background since this will be a long process (see this)
Un-tar the repo update file
opkg update
Notify user they may remove USB drive
At least from a high-level point of view does this sound like it is a good way to update an embedded system via USB? What pitfalls might exist?
Your use case is covered by SWUpdate; it may be worth taking a look at my project (github.com/sbabic/swupdate). Upgrading from USB with a single image file is one of its use cases, and you can use meta-swupdate (listed on OpenEmbedded) to generate the single image with all artifacts.
Well, regarding what pitfalls might exist, one of the biggest is likely a power outage in the middle of the process. How do you recover in that case? (The answer may depend on what type of embedded device you're making.) (Personally, I'm in favour of complete image-based upgrades, not package-based upgrades.)
Regarding your scenario with the tarball: do you have enough space on your device to unpack it? Instead of distributing a tarball of your opkg repository, it might be wiser to distribute e.g. an ext2 or squashfs image of the repository. That would allow you to mount it directly from the USB drive using the loopback device.
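A rough sketch of that idea (paths and the feed name are assumptions):
# on the build host: pack the package feed into a read-only squashfs image
mksquashfs ./opkg-repo repo.squashfs
# on the device: mount the image from the USB drive via a loop device
mount -o loop,ro /media/usb/repo.squashfs /mnt/repo
# point opkg at the local feed, e.g. /etc/opkg/local.conf: src/gz local file:///mnt/repo
opkg update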
Apart from that, as long as you have a good way to communicate with the user, your approach should work. The main issue is what to do in case of an interrupted upgrade process; that is something you need to think about in advance.
When deciding on the update format and design there are several things to consider; including what you want to update (e.g. kernel, applications, bootloader); there is a paper on the most popular designs here: https://mender.io/user/pages/04.resources/_white-papers/Software%20Updates.pdf
In your case (no network, lots of storage available on USB) the easiest approach is probably a full rootfs update. I am involved in an open source project, Mender.io, which does a dual A/B rootfs update and integrates with the Yocto Project to make it easy and fast to enable updates on devices without custom low-level coding.

Not able to install Gentoo Linux

Here is my situation: when I downloaded Gentoo and started to run it, I downloaded the stage III tarball from links and then tried to extract it. A stream of white sentences flowed down my screen really fast for about a minute, just like in the YouTube tutorial I was viewing. However, after that, instead of going to the correct stage, it said "cannot write: not enough space on device". I tried repartitioning, but I'm not sure what device it is talking about. Please help.
Sorry you're having this issue, though in general I truly believe the Gentoo Handbook is quite well written and even a newbie can follow it. Here is some advice I hope will help (most important: digest the handbook and follow it carefully; not that I'm saying "RTFM", it's just that for Gentoo the handbook is essential, and without it you can get lost if you're just starting).
From my experience, the "stream of white sentences" I'd presume is from verbosely un-tar'ing your stage3. Usually I only want to see the errors, so my suggestion is to remove the "v" (i.e. from "tar xjvpf" to "tar xjpf") so that only errors appear when un-tar'ing. The caveat is that you'll wonder whether it hung or is busy un-tar'ing. Use Alt-F1 and Alt-F2 (to switch back and forth if in console/tty mode) to log in on another TTY and run 'ps auxf' to see if it's still tar'ing. If you're using a GUI terminal, just open another tab and run 'ps auxf'; you get the picture...
Also, learn the command "df"; it'll come in handy. If you're running out of disk space, perhaps you're trying to install/untar stage3 into your ramdisk (grin) rather than your mounted root (i.e. "/mnt/gentoo"). Mount your root '/' device at '/mnt/gentoo' and cd to that mounted path, then try again (don't forget to mount your '/boot' as well as proc, dev, sys, etc. before you chroot; again, follow the handbook as carefully as you can. Also, distros such as Debian hybrids, including Ubuntu, use a symlink to shm, so read the part about 'rm /dev/shm' and follow it carefully; if you're using the Gentoo LiveCD, you can ignore that part).
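In concrete terms, the sequence I mean is roughly this (/dev/sda3 is only an example; check your device with 'lsblk'):
# mount the root partition and unpack stage3 there, not into the LiveCD ramdisk
mount /dev/sda3 /mnt/gentoo
cd /mnt/gentoo
tar xjpf /path/to/stage3-*.tar.bz2
# verify free space before and after
df -h /mnt/gentoo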
Other useful commands, if you're confused by (or new to) mounting devices, are 'lsblk' and 'mount' (by itself) to inspect the sizes of your partitions (again, 'df' comes in handy as well) and to see which device is which (i.e. /dev/sda1 versus /dev/sdb1). Hint: when you run 'mkfs', use "-L" (or for some file systems "-N") to label/name your devices, so that when you use commands such as 'mount' or 'lsblk' you can spot them more easily. If you're using the GUI/desktop version of some distro, there may be tools such as "gparted" which give you visual information about your devices, which can be helpful. One thing I'd advise you to stay away from if you're just starting is RAID (i.e. mdadm), until you're comfortable with how grub/lilo works. Get your kernel (gentoo-sources) compiled and the MBR written (i.e. grub-install), try booting, and have fun first (also, if you can avoid installing a GUI like Gnome/KDE from the get-go, do so; you'll run into questions such as "should I use systemd or OpenRC?" and then get hit by the obstacle that some Gnome parts need systemd while you've chosen OpenRC, and so on).
If I may add my opinion: Gentoo (also Arch and FreeBSD) is an excellent place to start if you want to learn the inner workings of Linux applications (library dependencies, why packages are important rather than downloading each lib manually and compiling them one by one, etc.). I hope this won't discourage you from Gentoo, but if the installation frustrates you and all you want to do is test-drive Linux, there are much easier distros where you won't have to understand USE flags and other compilation mechanisms (if you have an old i586, it makes sense to build it with hand-picked libraries so that a leaner system can be faster, but if you have a fast machine, why compile binaries when somebody who is expert at it has already done it for you?). SUSE and Fedora/RedHat/CentOS used to be the least frustrating, since they were able to find/detect hardware (legacy and new), but these days I usually tell people "if you know how to install Windows, you can install Ubuntu", so that too may be a good way to get your feet wet. Good luck!
0_o wow, well.. how about some 411, like the size of your hdd and exactly how you partitioned it? Linux will look for specific directories and, if they're missing, will instead start to install into the root dir. How you partition is an important first step. Once you have a generally good partition setup, most Linux installs will go fine. Most basic tables include /, /home, /var, and a swap.

Automating Win32 Driver Testing

Does anyone know ways of partially or fully automating driver test installation?
I am new to driver development and am used to more of a test-driven approach in higher-level languages, so moving to the kind of environment where I can't easily test as I go has been a step up for me. I am using Virtual PC for my test environment and currently have to reset it, open Device Manager, choose the device, click through a bunch of "Are you really sure you wouldn't rather install one of these system drivers?" type dialogs, then finally reset the test environment while restarting WinDbg on the host machine just as the test environment is booting up... argh.
After repeating this process many, many times already, surely there has to be a better way of doing this? What tools/methods/tricks do commercial driver developers use to bring up their driver in a test environment?
Note, this isn't about unit testing drivers; I haven't got to that stage yet and don't know if it is even possible. This is just about firing up a test environment with WinDbg attached to make sure that some small change I made is doing what I expect.
It seems to me that virtualization software + a "mock objects" (layering) approach (as suggested by Aaron Digulla) + scripts (as suggested by Sergius) can simplify device driver development.
But if you use Visual Studio to develop user-level applications, you can use it for kernel device driver development too with VisualDDK (plus VirtualKD to debug over a named pipe, which is faster than over a virtual COM port), which specifically addresses the annoyances you mention; from its home page:
... This project brings the simplicity and convenience of Windows application development to the driver development world. No more manual creation of build scripts, copying of driver files, installing drivers from INFs, switching between WinDbg and the source editor or waiting for seconds after each step due to the extra-slow virtual COM port. Just create a driver project using a convenient Driver Wizard, select a virtual machine, and enjoy debugging your driver directly from Visual Studio. Want to test a change? Just normally press Shift-F5, modify your driver, rebuild it and launch again. VisualDDK will unload the old driver, install the new one and load it automatically and quickly. Bored with WinDbg loading symbol files for minutes and looking up symbols for seconds? Just let VisualDDK optimize this for you using its own DIA-based symbol engine. Using C++/STLPort in your drivers? VisualDDK will natively visualize all STL containers and strings, as good as Visual Studio does for user-mode applications. ...
You can write shell scripts (using sc.exe and devcon.exe) to automate deployment tasks (no opening Device Manager, clicking buttons, etc.) and take a snapshot of the system ready to debug (so you needn't wait for the system to boot).
Don't forget to check your driver with Driver Verifier!
An example of my own script :)
sc create FsFilter type= filesys binPath= c:\FSFilterDrv.sys
sc start FsFilter
pause
sc stop FsFilter
sc delete FsFilter
Follow the advice I gave here. Basically, test as little as possible on the real system.
In your case, I've got another tip: Virtual PC uses a virtual hard disk (which is probably a file on your real hard disk).
You don't need to install your driver; you can simply replace the files in the virtual hard disk. This is often not possible on a running system, but in a virtual system you can open the virtual disk file and change it (since Windows isn't locking the files in it).
I'm not sure about Virtual PC but other emulators come with tools to work with virtual disk images. If VPC can't do it, check out VirtualBox.
It all depends a little on what kind of driver you are writing. But in many cases, writing an appropriate makefile (or something similar) that handles driver installation, start/stop, and launching of a test harness can already be good enough.
I also configure all of my test machines to log on automatically (AutoAdminLogon), map network drives, and launch an appropriate command prompt after startup. Running a specific test is then a matter of typing a single command.
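For instance, the auto-logon part can be scripted with the documented Winlogon registry values (the user name and password here are placeholders):
rem enable automatic logon on the test machine
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d testuser /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d testpass /f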
One word concerning Virtual PC: it is very handy for kernel-mode development, but do not forget that it emulates a uniprocessor machine only, so be sure to regularly test the code on a multiprocessor machine as well. That said, the VHD trick may seem handy, but it somewhat ties you to Virtual PC; writing appropriate scripts that work equally on Virtual PC and on a real machine therefore seems a better approach to me.
Finally, consider it a shameless plug, but if you are looking for a unit testing framework for Windows kernel mode code, I have written one: cfix.
I think the DevCon utility (described in this OSR Online article) will help you. You should be able to set up batch files that do the job in one click.
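A sketch of such a batch file (the hardware ID and INF name are placeholders):
rem remove any old instance of the test device, then install the rebuilt driver
devcon remove "root\MyTestDevice"
devcon install mytest.inf "root\MyTestDevice"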
It's free to sign up with osronline.com, and you'll probably have to sign up to get to that article. And if you are writing drivers, you WANT to sign up. These guys have been doing this for a long time, and there's a LOT of really good info on that site.
