How to make the output of a snap "parts:" available to "apps:"?

apps:
  library-sample:
    command: library_sample

parts:
  library:
    source: https://github.com/the/sample.git
    plugin: cmake
When snapcraft runs the cmake install, "library" will be installed on the system (as I would expect). cmake will also produce a test application in a samples folder under the build directory.
I would like to promote the sample (generated by the "part") to an installed app within the snap package.
How do I use the snap YAML to move it from a nested directory under the build folder into the snap's /bin folder?

You can do this by using Snapcraft's scriptlets, specifically the install scriptlet. Scriptlets allow you to modify the behavior of the build process by customizing sections of it. In the build lifecycle step, snapcraft essentially runs cmake && make && make install, but make install doesn't do everything you want it to do. The install scriptlet runs right after make install, so you can do something like this:
parts:
  library:
    source: https://github.com/the/sample.git
    plugin: cmake
    install: |
      cp -r path/to/samples $SNAPCRAFT_PART_INSTALL/
Now clean the build step with snapcraft clean -s build and run snapcraft again. Then the samples directory will end up in the final snap.
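If you want to double-check the result, one quick way is to list the contents of the built snap with unsquashfs from squashfs-tools (a sketch; the snap file name below is just an example, yours depends on the name and version in your snapcraft.yaml):
# optional check: list the built snap's contents and look for the copied samples
unsquashfs -l ./my-snap_*.snap | grep samples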

Related

Building Linux perf from source: how to modify the install directory?

I am following this wiki page to build perf from source as below:
PYTHON=python3 make -C tools/perf install
where ~/bin will be the default build directory.
How can I change the build directory to let's say ~/bin/test? I already have another perf build in ~/bin, and I want to have the new build in a different directory.
I have tried to modify the Makefile (if that is how to do it), but I could not figure it out.
One last silly question: Can I just move my current perf build to another directory, or will that screw up its links?
You should be able to easily install into a different directory by specifying prefix=... or DESTDIR=... when running make. You will see this and other info if you run make -C tools/perf help:
$ make -C tools/perf help
...
Perf install targets:
NOTE: documentation build requires asciidoc, xmlto packages to be installed
HINT: use "prefix" or "DESTDIR" to install to a particular
path like "make prefix=/usr/local install install-doc"
install - install compiled binaries
...
Make sure to pass an absolute path to avoid problems (you can use realpath for that):
PYTHON=python3 make -C tools/perf prefix=$(realpath ~/bin/test) install
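If you would rather not change the configured prefix, DESTDIR works as a staging root instead (a sketch; note that perf's Makefile defaults the prefix to $HOME, so the files land under that prefix inside the staging directory):
# DESTDIR stages the whole install tree under the given root, keeping the configured prefix
PYTHON=python3 make -C tools/perf DESTDIR=$(realpath ~/bin/test) install
# the perf binary then typically ends up under ~/bin/test/<prefix>/bin/perf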

How can I set up meson and ninja on Ubuntu Linux to produce the expected .a file, as the Makefile did?

Some years ago, on Ubuntu 16.04, I used this library: git clone https://github.com/Beckhoff/ADS. Using only the make command it built, and in the main directory I found a file called AdsLib-Linux.a (and maybe nothing more than this).
Now I'm on Ubuntu 20.04 and need this library again, but this time make doesn't produce the same output. Following the README instructions, I used this instead of make:
meson build
ninja -C build
That creates a new build directory, but no .a file in the root directory as before. Instead there is a new file, libAdsLib.a, in the build directory. The same thing happens when using the make command directly.
Maybe the author changed something in the config files over the years, or the behavior of the tools has changed, but I cannot get the former file anymore, and I need it for other code that references it and no longer builds.
Looking at the Makefile in the example folder, I found that, unlike the one in the parent directory, it contains something like this:
$(warning ATTENTION make is deprecated and superseeded by meson)
...
${PROGRAM}: LIB_NAME = ../AdsLib-${OS_NAME}.a
...
But everything I've tried after reading the guides on meson and ninja about setup, configure, build, and so on, no longer produces that file.
I've also tried building first, then copying all files from the example folder to the parent directory and building again, but again no .a file appeared there.
What is the right way to configure the build process so that this -Linux.a file is created? Or, if that is not possible anymore, what does it now produce that I can use instead of what was produced before?
Meson is a build system generator, similar to CMake or somewhat like ./configure: it generates the build files, and ninja then does the actual building. You need to run both meson and ninja:
meson setup builddir
ninja -C builddir
Once you do that successfully, there will be a libAdsLib.a inside the builddir directory.
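If you want to sanity-check the result, listing the archive's members with the standard ar tool is one quick way (the path assumes the builddir name used above):
# list the object files inside the static archive
ar t builddir/libAdsLib.a | head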
Let me correct #dcbaker a bit: according to their README you should set up build as the build directory:
# configure meson to build the library into "build" dir
meson build
# let ninja build the library
ninja -C build
Of course, in general the directory name shouldn't matter, but their example code is written in a weird way, so this path is hard-coded. So, to build the example:
# configure meson to build example into "build" dir
meson example/build example
# let ninja build the example
ninja -C example/build
# and run the example
./example/build/example
About the library: it's now named libAdsLib.a and is produced in the build directory. The name is set in the meson build configuration and now follows the Linux naming convention; the old one did not. So, you have options:
Update your configuration/build files (Makefile?) where you use it
Copy or make symbolic link, e.g.
$ ln -s <>/build/libAdsLib.a <target_path>/AdsLib-Linux.a
Which of the above to choose depends heavily on your development environment: do you have installation or setup scripts for it? Do you have permissions to modify/configure parameters for the target application? Do you need to support both the old and the new names? These are many questions not related to the original question about meson.

Package nodejs application with global packages

We have a project which has to be packaged as a zip so we can distribute it to our clients. With the normal node_modules directory I have no problems: I just put the directory and node.exe together in my project folder and can start our project on every other computer without installing Node or running any npm command.
But now I have a dependency on phantomjs, which needs to be installed as a global package: npm install -g phantomjs.
How do I pack modules like this into our project? I first thought of copying phantomjs into the local node_modules directory and setting the NODE_PATH environment variable to that directory, but it doesn't find phantomjs.
Development and client platforms are both windows.
Well, generally it is fine to install global dependencies with the --save flag and call their bins like ./node_modules/phantomjs/bin/phantomjs (just as an illustrative example).
However, with Phantom it's not that simple, since it downloads binaries and/or even compiles. You would have three options:
ssh into the target and just run npm install -g phantomjs beforehand, or define it in a manifest (e.g. a Dockerfile) just like that, if you are using containers.
Compile it from source, as advised in the PhantomJS build documentation.
If you are using the CLI, then just the --save approach.
So I would heartily advise just making a Docker image out of it and shipping it as a tarball. You can't zip the platform-dependent Phantom installation, unfortunately.
Also, lots of dependencies like karma-runner-phantomjs look up the path of the global dependency to resolve it for their own use.
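For the --save approach mentioned above, a minimal sketch looks like this (the binary is still downloaded per platform at install time, so it remains platform-specific):
# install phantomjs as a local (not global) dependency so it ships with node_modules
npm install --save phantomjs
# invoke the locally installed binary via the .bin shim instead of relying on a global install
./node_modules/.bin/phantomjs --version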

How can I extract the environment variables used when building a recipe in Yocto?

I am working on a kernel module for a project using Yocto Linux (version 1.3). I want to use the kernel headers and the compiler and libraries from my Yocto project, but develop the kernel module without needing to run bitbake every time. My initial solution to this was to execute the devshell task and extract the environment variables using something like this:
bitbake mykernel -c devshell
Then, in the new xterm window that bitbake opened for me:
env | sed 's/\=\(.*\)/\="\1"/' > buildenv #put quotes around r-values in env listing
^D #(I leave the devshell)
Then copy this to my development directory and source it before running make with all of its options
KERNEL_PATH=/mypathto/build/tmp/sysroots/socfpga_cyclone5/usr/src/kernel
source ./buildenv && make -C $KERNEL_PATH V=1 M=`pwd` \
ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- \
KERNEL_VERSION=3.13.0-00298-g3c7cbb9 \
CC="arm-linux-gnueabihf-gcc -mno-thumb-interwork -marm" \
LD=arm-linux-gnueabihf-ld AR=arm-linux-gnueabihf-ar
Now to my questions:
Am I going about this completely wrong? What is the recommended way to cross-develop kernel modules? I am doing it this way because I don't want to open a bitbake devshell and do my code development in there every time.
This sort of works (I can compile working modules), but the make script gives me an error message saying that the kernel configuration is invalid. I have also tried this with KERNEL_PATH set to the kernel package git directory (build/tmp/work///git), which contains what appears to be a valid .config file, and I get a similar error.
How can I extract the env without needing to open a devshell? I would like to write a script that extracts it so my coworkers don't have to do it manually. The devshell command opens a completely separate xterm window, which rather hampers its scriptability...
The SDK installer is what you are looking for:
bitbake your-image -c populate_sdk
Then, from your build directory, go to tmp/deploy/sdk and execute the generated shell script.
Running this script installs the SDK.
Not only will the SDK allow you to (cross-)compile your kernel by providing the needed environment variables and tools, it will also provide a sysroot plus a standalone toolchain to help you easily (and by easily I mean really easily) cross-compile applications with the autotools (as long as you provide Makefile.am and configure.ac).
Just source the environment-setup-* file, go to your kernel directory, and compile.
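As a rough sketch of that last step (the environment-setup file name and the kernel source path below are only examples; yours will differ depending on where you installed the SDK and which machine you build for):
# source the cross-compile environment installed by the SDK (example path)
source /opt/poky/1.3/environment-setup-armv7a-vfp-neon-poky-linux-gnueabi
# build the module out of tree against the kernel sources (example path)
make -C /path/to/kernel/source M=$(pwd) modules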
Or, for application development based on the autotools,
go to the folder containing your project (sources + Makefile.am and configure.ac)
and do:
libtoolize --automake
aclocal
autoconf
automake -a
now your project is ready for compilation:
./configure $CONFIGURE_FLAGS
make
make install DESTDIR=path/to/destination/dir
If you're after a quick hack, instead of Ayman's more complete solution, the scripts run to complete each build stage are available at:
./build/tmp/work/{target_platform}/{package}/{version}/temp/run.do_{build_stage}
These scripts can be run standalone from the ./temp/ directory, and contain all of the environment variables required.
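That also gives you a scriptable way to capture the environment without opening a devshell: the run.do_* scripts begin with the exported variables, so you can pull them out directly (a sketch, reusing the same placeholder path components as above):
# extract the exported build variables from the compile-stage script and reuse them
grep '^export ' ./build/tmp/work/{target_platform}/{package}/{version}/temp/run.do_compile > buildenv
source ./buildenv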

Can't install OpenCV on Mac: there are no files in /usr/local

I've downloaded tar.gz files from the official site (versions 2.4.3, 2.4.7, 2.4.8), then unpacked them somewhere.
mac-mini-olia:Data olia$ cd opencv-2.4.6.1/
mac-mini-olia:opencv-2.4.6.1 olia$ mkdir build
mac-mini-olia:opencv-2.4.6.1 olia$ cd build/
mac-mini-olia:build olia$ cmake -G "Unix Makefiles" ..
mac-mini-olia:build olia$ make -j8
After the last command, the output is:
3910 warnings and 12 errors generated.
The errors are like:
/Volumes/Data/opencv-2.4.6.1/3rdparty/libjpeg/._jcapimin.c:1:4096: error: source file is not valid UTF-8
and
/Volumes/Data/opencv-2.4.6.1/3rdparty/libpng/._pngerror.c:1:2: error: expected identifier or '('
And after that there are no OpenCV files in /usr/local/lib or /usr/local/include.
There are multiple ways to install OpenCV on OSX.
You can use MacPorts
Make sure you have Xcode installed with its Command Line Tools first.
(An easy way to test that is to see if xcodebuild is a recognized command in Terminal.)
After you install MacPorts, simply do:
sudo port install opencv
This will take care of building the project from source and installing it for you.
(The paths might be /opt/local/lib and /opt/local/include though; I haven't used it in a while.)
There are also port variants: opencv with options. For example, if you plan to use OpenNI 1.5.x and have it integrated with OpenCV, you can try:
sudo port install opencv +openni
If you do
sudo port variants opencv
you should get a list of all the options (e.g. Python support, Qt support, etc.).
If you want to build from source yourself, I recommend installing the ccmake command (I think MacPorts can also do that for you) or using the CMake GUI tool. This will allow you to easily configure the build and set up your install location (/usr/local/... and so on).
So you can try something like this:
cd /path/to/your/opencv_folder
mkdir build && cd build
ccmake ..
At this stage, ccmake shows the list of build options: hit Enter to change an option and Enter again to exit edit mode, and use the up/down keys to scroll through the options. When you're happy with the settings, press c to configure. Once that's done, you can press g to generate. This will generate the makefiles for you so you can do this:
make
sudo make install
make install will actually copy the built libraries/headers into the /usr/local folder.
You might run into errors when running make, depending on your setup (e.g. if you're missing dependencies), but the cool thing about ccmake is that you can go back, run it again, disable the things you don't want to build right now, and return to the make stage.
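If you prefer a non-interactive route, the same install location can be set directly on the cmake command line instead of through ccmake (a sketch; the prefix shown here is just the usual default):
# configure with an explicit install prefix instead of editing options interactively
cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/usr/local ..
make -j8
sudo make install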
