Conditional inclusion of patch file in recipe script - embedded-linux

I have a recipe file, and my SRC_URI section looks something like this:
SRC_URI += "file://file1.patch \
file://file2.patch \
file://file4.patch \
"
I want to include a file5.patch under SRC_URI only if a certain environment variable is set. Is there a way to insert an if condition into the SRC_URI that looks something like this:
SRC_URI += "file://file1.patch \
file://file2.patch \
file://file4.patch \
if $ENVIRONMENT_VARIABLE:
file://file5.patch
"
Is there any other way I can achieve the same thing?

Well, the short answer is: yes, you can do this, but it's messy and there's probably a Better Way(TM). So let's answer the question first. If you really want to change the behavior of a recipe using an environment variable, the first challenge is to set the environment variable and then let bitbake know that your new variable is safe and allowed. When you source the oe-init-build-env script to set up your project (or later to set up a new shell to continue working on it), it sets an environment variable called BB_ENV_EXTRAWHITE. You must add your new environment variable to this list, like this:
$ export MYENV_VAR="file://file5.patch"
$ export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE MYENV_VAR"
Once this is done, bitbake won't scrub your new variable out of the environment.
In your recipe, use an inline Python snippet to conditionally add your patch, as follows:
SRC_URI += "${@os.getenv('MYENV_VAR', '')}"
As you can see, it's a bit messy. Of course, you could get a little more complex and test the value of the variable in your recipe, instead of putting the name of the patch file in your environment variable, but this example was the simplest way to demonstrate the concept.
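A minimal sketch of that more complex variant, assuming the variable carries a simple flag value rather than the patch name (untested, just to illustrate the idea):
$ export MYENV_VAR=1
SRC_URI += "${@'file://file5.patch' if os.getenv('MYENV_VAR', '') == '1' else ''}"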
Perhaps a better way is to use an override and not rely on environment variables. If you are building a BSP with multiple variants, you could use your BSP name as the override, something like this:
SRC_URI_append_mybsp = "file://file5.patch"
This is a much cleaner way to accomplish the same thing. Of course, I'm speculating about your use case. The Yocto Project reference manual explains overrides. One more suggestion: join #yocto or the Yocto Project mailing list and you will have access to many smart people who can help you.
Hope this helps. ;)

The proper way to accomplish this would be as follows:
1. local.conf
# comment the following line to remove file5.patch
ENV_VAR = "1"
NOTE: Don't forget to include the double quotes, otherwise Yocto will throw an error.
2. recipe.bbappend
SRC_URI += "${#bb.utils.contains('ENV_VAR', '1', 'file://file5.patch', '', d)}"
Instead of local.conf, you're free to use any .conf file. This approach is taken from the Yocto mailing list.
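If the conditional patch lives next to a bbappend in your own layer, a hedged sketch of the complete bbappend might look like this (the recipe name and files/ subdirectory are assumptions):
# recipe_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "${@bb.utils.contains('ENV_VAR', '1', 'file://file5.patch', '', d)}"
On newer Yocto releases the override-style syntax is FILESEXTRAPATHS:prepend.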

Related

Can I reuse a makefile multiple times within a single execution?

Suppose I have:
# ./Makefile
CLUSTER=dev
include Makefile.cluster.mk
CLUSTER=local
include Makefile.cluster.mk
And in:
# ./Makefile.cluster.mk
${CLUSTER}.cmd:
cmd ${CLUSTER}
So now I can call:
make dev.cmd
make local.cmd
Great! Except the variable is evaluated too late. Running:
$ make local.cmd # cmd local
$ make dev.cmd # Also cmd local !
Makes sense: according to https://www.gnu.org/software/make/manual/html_node/Reading-Makefiles.html,
recipe lines use deferred evaluation (vs. immediate evaluation when the file is read):
immediate : immediate ; deferred
deferred
Is there a better/other way to compose a set of make commands without maintaining multiple copies of the same file?
There are lots of ways to do it, even beyond the options above; you could use static pattern rules:
CLUSTERS := dev local
$(CLUSTERS:%=%.cmd) : %.cmd :
cmd $*
If you really want to have stuff in a separate makefile you can use target-specific variables; change your Makefile.cluster.mk to do this:
# ./Makefile.cluster.mk
${CLUSTER}.cmd: CLUSTER := $(CLUSTER)
${CLUSTER}.cmd:
cmd ${CLUSTER}
Is there a better/other way to compose a set of make commands without maintaining multiple copies of the same file?
Often it's pattern rules. In the case of your particular example, you might do
Makefile
%.cmd:
cmd '$*'
However, that particular version will enable any make foo.cmd, which might not be what you want.
Sometimes it's to make better use of the tools available to you. For example,
Makefile.cluster.mk
${CLUSTER}.cmd:
arg='$@'; cmd "$${arg%.cmd}"
That extracts the wanted cluster name from the name of the target.
Occasionally it is $(eval).
(See the manual for an example.)
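A hedged sketch of the $(eval) approach, using a template macro (the variable and macro names are made up):
CLUSTERS := dev local

define cluster_rule
$(1).cmd:
	cmd $(1)
endef

$(foreach c,$(CLUSTERS),$(eval $(call cluster_rule,$(c))))
With this, make dev.cmd runs cmd dev and make local.cmd runs cmd local, without maintaining two copies of the rule.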
And from time to time, it's "don't do that." For example,
Makefile
CLUSTERS = dev local
CMDS = $(patsubst %,%.cmd,$(CLUSTERS))
$(CMDS):
arg='$@'; cmd "$${arg%.cmd}"
That defines only dev.cmd and local.cmd targets, and avoids duplicating the recipe.

How does "bitbake virtual/kernel" work if kernel recipes don't have PROVIDES variable set to virtual/kernel?

I'm trying to understand a few of the pieces involved in using bitbake to compile the Linux image and generate a boot image that would be flashed onto the processor.
How come bitbake virtual/kernel works at all? I read through section 2.3, which says recipes use the PROVIDES variable to add an extra provider, meaning a recipe can be referred to in multiple ways (by its name, and by whatever PROVIDES is set to). But the kernel recipes (../poky/meta-bsp/recipes-kernel) I checked didn't have a PROVIDES variable, let alone one set to virtual/kernel.
When running bitbake virtual/kernel, why is a boot.img being generated, when it should only be producing a Linux binary such as vmlinux?
In one of the kernel .inc files, I see:
DEPENDS += " mkbootimg-native openssl-native kern-tools-native"
...
FILESPATH =+ "${WORKSPACE}:"
SRC_URI = "file://kernel \
${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'file://systemd.cfg', '', d)} \
${@bb.utils.contains('DISTRO_FEATURES', 'virtualization', 'file://virtualization.cfg', '', d)} \
${@bb.utils.contains('DISTRO_FEATURES', 'nand-squashfs', 'file://squashfs.cfg', '', d)} \
I reckon mkbootimg-native refers to the boot image recipe that the kernel recipe depends on, but shouldn't it be the other way around, since the boot image should contain the kernel image itself?
Lastly, is there a way to put debug prints in recipe files to see whether they are being invoked? I tried echo... to no avail.
The recipes you checked probably do have PROVIDES. Most if not all kernel recipes inherit the kernel class (directly or via some other class, such as kernel-yocto), and kernel.bbclass sets PROVIDES for you, cf. http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/kernel.bbclass#n8.
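So any recipe that inherits kernel automatically provides virtual/kernel; which provider actually gets built is then selected per machine, typically with a line like the following (the recipe name here is just an example):
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"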
boot.img does not seem to be created by default for any machine. After a quick glance at the code, it seems that this is created by wic for images inheriting the image-live bbclass or by adding live to IMAGE_FSTYPES, cf. http://docs.yoctoproject.org/ref-manual/classes.html#image-live-bbclass.
From a simple git grep in the poky git repo, it seems only bootimg-efi.py actually does something with a boot.img; it is called by wic when the -b or --bootimg-dir argument is passed, which is enforced when using wic. So the boot.img artifact is probably created only for EFI machines and images.
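As a rough sketch, opting into the image-live / IMAGE_FSTYPES path mentioned above usually amounts to a single line in local.conf or the image recipe (untested here):
IMAGE_FSTYPES += "live"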
If you use echo or printf or similar shell functions (or print in Python tasks) in your tasks, you can only see their output in ${WORKDIR}/temp/log.do_<task> of your recipe. Otherwise, you can use bbplain, bbnote, bbdebug, bbwarn, bberror or bbfatal. These print to both the logs and the console, depending on your log level, which is configurable with -D (the more Ds, the higher the log level).
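For instance, a minimal sketch (the tasks chosen here are arbitrary, and on newer Yocto releases the append syntax is do_compile:append):
do_compile_append() {
    bbnote "reached do_compile for ${PN}"
}

python do_patch_append() {
    bb.note("reached do_patch for %s" % d.getVar('PN', True))
}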

Chef compile error when capturing shell output

I have a chef recipe that looks something like this:
package 'build-essential' do
action :install
end
cmd = Mixlib::ShellOut.new("gcc -dumpversion")
cmd.run_command
gcc_version = cmd.stdout.strip()
If I execute the recipe on a system where gcc is installed, the recipe runs fine without errors. However, if I run the recipe on a system which doesn't have gcc installed, I get the error 'no such file or directory - gcc'.
I came across Chef's two-phase (compile then converge) model while trying to find a solution to my problem. I was expecting the package installation to satisfy the gcc requirement. How can I tell Chef that this requirement will be satisfied later, and not throw an error at compile time?
I tried the following, but the attribute does not get updated.
Chef::Resource::RubyBlock.send(:include, Chef::Mixin::ShellOut)
ruby_block "gcc_version" do
block do
s = shell_out("gcc -dumpversion")
node.default['gcc_version'] = s.stdout.strip()
end
end
echo "echo #{node[:gcc_version]}" do
command "echo #{node[:gcc_version]}"
end
Any help is appreciated. Thanks.
So okay, a few issues here. First, forget that Chef::Resource::whatever.send(:include, ...) trick. Never do it, literally never. In this case, the ShellOut mixin is already available in all the places you need it anyway.
Next, and more importantly, you've still got a two-pass confusion issue. See https://coderanger.net/two-pass/ for details, but basically the strings in that echo resource (I assume that said execute originally and you messed up the copying?) get interpolated at compile time. You haven't said what you are trying to do, but you probably need to use the lazy {} helper method.
And last, don't store things in node attributes like that; it's super brittle and hard to work with.
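For illustration only, a hedged sketch of how the lazy / run_state pattern is usually put together (the resource names are made up, and this assumes gcc gets installed earlier in the same run):
ruby_block 'capture gcc version' do
  block do
    node.run_state['gcc_version'] = shell_out('gcc -dumpversion').stdout.strip
  end
end

execute 'print gcc version' do
  command lazy { "echo #{node.run_state['gcc_version']}" }
end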

Get autocompletion list in bash variable

I'm working with a big software project with many build targets. When typing make <tab> <tab> it shows over 1000 possible make targets.
What I want is a bash script that filters those targets by certain rules. Therefore I would like to have this list of make targets in a bash variable.
make_targets=$(???)
[do something with make_targets]
make $make_targets
It would be best if I wouldn't have to change anything with my project.
How can I get such a List?
@yuyichao created a function to get the autocompletion output:
comp() {
COMP_LINE="$*"
COMP_WORDS=("$#")
COMP_CWORD=${#COMP_WORDS[#]}
((COMP_CWORD--))
COMP_POINT=${#COMP_LINE}
COMP_WORDBREAKS='"'"'><=;|&(:"
# Don't really think any real autocompletion script will rely on
# the following 2 vars, but on principle they could ~~~ LOL.
COMP_TYPE=9
COMP_KEY=9
_command_offset 0
echo ${COMPREPLY[@]}
}
Just run comp make '' to get the results, and you can manipulate that. Example:
$ comp make ''
test foo clean
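Tying that back to the question, a hedged sketch of the filtering step (the grep pattern is only a stand-in for your real rules):
make_targets=$(comp make '' | tr ' ' '\n' | grep '^deploy')
make $make_targets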
You would need to overwrite / modify the completion function for make. On Ubuntu it is located at:
/usr/share/bash-completion/completions/make
(Other distributions may store the file at /etc/bash_completion.d/make)
If you don't want to change the completion behavior for the whole system, you might write a small wrapper script like build-project, which calls make, and then write a completion function for that wrapper, derived from make's one.

Ruby - Is there a way to overwrite the __FILE__ variable?

I'm doing some unit testing, and some of the code is checking to see if files exist based on the relative path of the currently-executing script by using the __FILE__ variable. I'm doing something like this:
if File.directory?(File.join(File.dirname(__FILE__),'..','..','directory'))
blah blah blah ...
else
raise "Can't find directory"
end
I'm trying to find a way to make it fail in the unit tests without doing anything drastic. Being able to overwrite the __FILE__ variable would be easiest, but as far as I can tell, it's impossible.
Any tips?
My tip? Refactor!
I didn't mark this as the real answer, since refactoring would be the best way to go about doing it. However, I did get it to work:
wd = Dir.getwd
# moves us up two directories, also assuming Dir.getwd
# returns a path of the form /folder/folder2/folder3/folder4...
Dir.chdir(wd.scan(/\/.*?(?=[\/]|$)/)[0..-3].join)
...run tests...
Dir.chdir(wd)
I had tried to do it using Dir.chdir('../../'), but when I changed back, File.expand_path(File.dirname(__FILE__)) resolved to something different than what it was originally.
Programming Ruby 1.9 says on page 330 that __FILE__ is read-only. It also describes it as an "execution environment variable".
However, you can define __FILE__ within an instance_eval. I don't think that'd help with your problem.
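For what it's worth, a minimal sketch of that instance_eval behaviour (the paths and file names are made up):
code = File.read('lib/thing.rb')                      # hypothetical file under test
Object.new.instance_eval(code, '/fake/path/thing.rb') # __FILE__ inside the evaluated code is '/fake/path/thing.rb'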
