How can I communicate/pass values from a recipe to the OS environment? The Jenkins pipeline task that runs bitbake has to know some values from the recipe.
I already tried export, but the values didn't show up in the OS environment.
To be more specific, I want to know the git tag (description) that was used for the build in one of the recipes. In other words, the recipe in the meta layer checks out a certain git tag, and this tag should be used as the build description for Jenkins.
I am trying to get iperf 3.2 to build using environment variables for directories like --prefix.
This works fine on one machine when I run ./configure using paths like $MYDIR, etc.
The problem comes after I run configure and commit the files to my git repository.
When I clone to a different machine (like Jenkins) and run the build, autoconf decides to rebuild the Makefiles, but they contain the old paths from the old machine instead of the environment variables.
How can I store/set up the configure call to preserve $MYDIR unexpanded, so that when it rebuilds the Makefiles somewhere else it uses the correct directory based on the environment?
Forgive my ignorance....
I figured it out.
I was passing the paths to configure using double-quoted shell variables, like so:
--prefix="$MYDIR/foo"
which the shell expands to a real path like /home/user/blah before configure ever sees it.
I changed it to this:
--prefix='${MYDIR}/foo'
which is recorded exactly as written, which is what I want/need.
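To make the difference concrete, a minimal sketch (/opt/builds is a hypothetical path):
export MYDIR=/opt/builds
./configure --prefix="$MYDIR/foo"    # the shell expands first; configure records /opt/builds/foo
./configure --prefix='${MYDIR}/foo'  # the literal ${MYDIR}/foo is recorded instead
grep '^prefix' Makefile              # check which form ended up in the generated Makefile
With the single-quoted form, make expands ${MYDIR} from the environment on each machine, so regenerated Makefiles keep working after a clone.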
I have set up multiple Python environments using Anaconda.
Usually, to run a script "manually", I would open a command line and then type:
activate my-env
python path/to/my/script.py
Fine.
Now I am trying to run a script automatically using a scheduler, and I was wondering what the difference is between:
Writing a batch file which activates the environment and then executes the script (like in the snippet above)
Calling the python executable from the environment directly (within the envs/my-env/ directory), like below:
/path/to/envs/my-env/python.exe path/to/my/script.py
Both seem to work fine. Is there any difference?
I don't claim to be an expert but here's my 2 cents.
For small scripts, no, there isn't a difference.
You should notice a difference when calling external modules / packages. conda activate alters the system path to change how the command shell searches for the appropriate capabilities.
If you supply a full path to an interpreter and the full path to an isolated script, then the shell doesn't need to do a lookup, as the explicit path takes priority over the PATH search. This means you could be in a situation where the interpreter can see the script but cannot see its dependencies.
If you follow the conda activate process, and the environment is correctly packaged, then the shell will be able to trace any additional resources.
EDIT: The idea behind this is portability. If an admin has been careful in setting up a system, then scripts should have the appropriate visibility - i.e. see everything in its environment plus everything in the main system installation.
It's possible to full-path every call to an interpreter and a script or package location, but then what happens when you need to move it to another machine? You would need to spend a lot of time setting everything up exactly as it was before. On the other hand, you can follow the package process and the system path will trace everything for you.
Simply check out the PATH variable in your environment. After conda activation, it has been extended by:
\Anaconda3;
\Anaconda3\Library\mingw-w64\bin;
\Anaconda3\Library\usr\bin;
\Anaconda3\Library\bin;
\Anaconda3\Scripts;
\Anaconda3\bin;
This doesn't make much of a difference if you are just using the standard library in your code. However, if you rely on external packages like pandas, it is a prerequisite so that the modules can be found.
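To see the effect for yourself, a quick check (Windows cmd; my-env is the environment name from the question):
where python
:: lists every python.exe found, in PATH order
call activate my-env
where python
:: the environment's python.exe should now come first
python -c "import sys; print(sys.executable)"
:: confirms which interpreter actually runs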
Good morning. I am reading about the prepared scripts in Mac OS X to use when creating a pkg for my application.
In particular, I have some doubts about how to make sure that the postupgrade script is used.
What I read till now is:
from here
The postupgrade script is run after files have been installed and before the postflight script if one is defined. This script is run only if the component has been previously installed. If the script does not return an exit status of zero, Installer will declare the installation has failed.
OK, then it seems that postupgrade will just run automatically when an upgrade is done. BUT... from man pkgbuild, section --scripts scripts-path:
Archive the entire contents of scripts-path as the package scripts. If this directory contains scripts named preinstall and/or postinstall, these will be run as the top-level scripts of the package. If you want to run scripts for specific bundles, you must specify those in a component property list; see more at COMPONENT PROPERTY LIST. Any other files under scripts-path will be used only if the top-level or component-specific scripts invoke them.
So, it seems I should add it to the component plist, since they do not say anything about postupgrade. BUT that seems strange; I would expect to put more specific scripts there, not the postupgrade script.
Reading more, I found a post that refers to another one, where the following is written:
To determine whether a Package has already been installed or not, Installer.app is having a look at the content of the following directory: /Library/Receipts. If there's a file named PackageName.pkg within it, then the Package has already been installed, otherwise it's the first install.
Well, my application leaves no pkg file there, but it is indeed present in InstallHistory.plist.
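From what I can tell, on newer systems the receipts are kept in a database that pkgutil can query, rather than as .pkg files in /Library/Receipts, which may explain why I see no file there. For example (the package identifier is hypothetical):
pkgutil --pkg-info com.example.myapp
This prints receipt information if the package has been installed before, and fails otherwise.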
Well, finally the question: should I set the upgrade script somewhere, for example in the component.plist file? The last link seems to be out of date; has something changed? How can I put a pkg file inside /Library/Receipts? Or better, how can I be sure that my installation is indeed an installation and not an upgrade, or vice versa?
Thanks everyone, I am a bit confused...
I have an application development server that is automatically updated every night with a massive shell script that we run with crontab. The script specifies #!/bin/sh at the top of the file and I am not able to change that. The basic purpose of the script is to go through the machine and download the latest code in each of the directories that we list in the script. After all of the repositories are updated, we execute a number of scripts to update the relevant databases using the appropriate virtual environment (Django manage.py commands) by calling that virtualenv's python directly.
The issue that I am having is that we have all the necessary Sphinx plugins installed in one of the virtual environments to allow us to build the documentation from the code at the end of the script, but I cannot seem to figure out how to allow the make command to run inside of the virtualenv so that it has access to the proper packages and libraries. I need a way to run the make command inside of the virtual environment and if necessary deactivate that environment afterwards so that the remainder of the script can run.
My current script looks like the below and gives errors on the last three lines, because sh does not have workon or deactivate, and because make can't find sphinx-build.
cd ${_proj_root}/dev/docs
workon dev
make clean && make html
deactivate
I was able to find the answer to this question here. The error message that is shown when you attempt to build the Sphinx documentation from the root is as follows, and leads to the answer that was provided there:
Makefile:12: *** The 'sphinx-build' command was not found. Make sure
you have Sphinx installed, then set the SPHINXBUILD environment
variable to point to the full path of the 'sphinx-build' executable.
Alternatively you can add the directory with the executable to your
PATH. If you don't have Sphinx installed, grab it from
http://sphinx-doc.org/. Stop.
The full command for anyone looking to build Sphinx documentation through a cron job, when all tools are installed in various virtual environments, is listed below. You can find the location of your python and sphinx-build commands by using which while the environment is activated.
make html SPHINXBUILD='<virtualenv-path-to>/python <virtualenv-path-to>/sphinx-build'
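As a concrete sketch, if the virtualenv happened to live at /opt/venvs/dev (a hypothetical path; find your real ones with which python and which sphinx-build while the environment is active):
cd ${_proj_root}/dev/docs
make clean
make html SPHINXBUILD='/opt/venvs/dev/bin/python /opt/venvs/dev/bin/sphinx-build'
Since both paths point into the virtualenv, make no longer needs workon or an activated shell, so it runs fine under #!/bin/sh from cron.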
I am working on a kernel module for a project using Yocto Linux (version 1.3). I want to use the kernel headers and the compiler and libraries from my Yocto project, but develop the kernel module without needing to run bitbake every time. My initial solution to this was to execute the devshell task and extract the environment variables using something like this:
bitbake mykernel -c devshell
Then in the new xterm window bitbake opened for me:
env | sed 's/\=\(.*\)/\="\1"/' > buildenv #put quotes around r-values in env listing
^D #(I leave the devshell)
Then copy this to my development directory and source it before running make with all of its options
KERNEL_PATH=/mypathto/build/tmp/sysroots/socfpga_cyclone5/usr/src/kernel
source ./buildenv && make -C $KERNEL_PATH V=1 M=`pwd` \
ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- \
KERNEL_VERSION=3.13.0-00298-g3c7cbb9 \
CC="arm-linux-gnueabihf-gcc -mno-thumb-interwork -marm" \
LD=arm-linux-gnueabihf-ld AR=arm-linux-gnueabihf-ar
Now to my questions:
Am I going about this completely wrong? What is the recommended way to cross-develop kernel modules? I am doing it this way because I don't want to open a bitbake devshell and do my code development in there every time.
This sort of works (I can compile working modules), but the make script gives me an error message saying that the kernel configuration is invalid. I have also tried this with KERNEL_PATH set to the kernel package git directory (build/tmp/work///git), which contains what appears to be a valid .config file, and I get a similar error.
How can I extract the env without needing to open a devshell? I would like to write a script that extracts it so my coworkers don't have to do it manually. The devshell command opens a completely separate Xterm window, which rather dampens its scriptability...
The SDK installer is what you are looking for:
bitbake your-image -c populate_sdk
then, from your build directory, go to tmp/deploy/sdk
and execute the generated shell script.
Running this script will install the SDK.
Not only will the SDK allow you to (cross-)compile your kernel by providing the needed environment variables and tools, but it will also provide a sysroot plus a standalone toolchain that lets you easily (and by easily I mean really easily) cross-compile applications with the autotools (as long as you provide Makefile.am and configure.ac).
Just source the environment-setup-* file, go to your kernel directory and compile.
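For the kernel module case, that boils down to something like this (a minimal sketch: the SDK install path and environment-setup file name are hypothetical and depend on your target, and KERNEL_PATH is the kernel source directory from the question):
. /opt/poky/1.3/environment-setup-armv7a-vfp-neon-poky-linux-gnueabi
cd /path/to/my/module
make -C $KERNEL_PATH M=$(pwd) ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules
Sourcing the environment-setup file puts the cross toolchain on PATH and exports CC, LD, AR and friends, so most of the long list of make overrides from the question becomes unnecessary.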
Or, for application development based on the autotools,
go to the folder containing your project (sources + Makefile.am and configure.ac)
and do:
libtoolize --automake
aclocal
autoconf
automake -a
now your project is ready for compilation:
./configure $CONFIGURE_FLAGS
make
make install DESTDIR=path/to/destination/dir
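For reference, this sequence can work from a minimal pair of input files like these (the project name hello and the file contents are hypothetical):
configure.ac:
AC_INIT([hello], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
Makefile.am:
bin_PROGRAMS = hello
hello_SOURCES = hello.c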
If you're after a quick hack, instead of Ayman's more complete solution, the scripts that run each build stage are available at:
./build/tmp/work/{target_platform}/{package}/{version}/temp/run.do_{build_stage}
These scripts can be run standalone from the ./temp/ directory, and contain all of the environment variables required.
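For example (the target, package name, and version are hypothetical; substitute the ones from your own build tree):
cd build/tmp/work/armv7a-poky-linux-gnueabi/mymodule/1.0-r0/temp
./run.do_compile
Each run.do_* script is self-contained, so this re-runs just that one stage with the same environment bitbake would have set up.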