How to check if g++ supports C++14/17 in SCons?

I'm using SCons as my build system.
Some of my code requires C++14 or C++17; it lives in folders like "newcpp". I wish my SConstruct/SConscript could check whether my g++ supports these flags, pseudo-code like below:
import os, sys
env = Environment(ENV={'PATH': os.environ['PATH']})
if CXX_SUPPORTS('-std=c++14'):  # pseudo-code: no such built-in check
    env.SConscript(dirs=['newcpp'])
I know that automake/configure supports this kind of check. How do I do it in SCons?

Most likely you want to use Configure Contexts.
See this section of the User's Guide:
https://scons.org/doc/production/HTML/scons-user/ch23.html
And this section of the manpage:
https://scons.org/doc/production/HTML/scons-man.html#configure_contexts
So likely you'll want something like this:
env = Environment(CXXFLAGS='-std=c++14')  # -std= belongs in CXXFLAGS, not CFLAGS
conf = Configure(env)
if conf.CheckCXX():
    print("Yes CXX14")
    env.SConscript(dirs=['newcpp'])
env = conf.Finish()
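If you need to probe several standards rather than hard-coding one, something like the following should work. This is a minimal untested sketch; the helper name find_cxx_std is mine, not a built-in SCons check:
import os

env = Environment(ENV={'PATH': os.environ['PATH']})

def find_cxx_std(env, candidates):
    # Try each -std= flag in order and return the first one that compiles
    conf = Configure(env)
    accepted = None
    for std in candidates:
        conf.env.Replace(CXXFLAGS=[std])
        if conf.TryCompile('int main() { return 0; }\n', '.cpp'):
            accepted = std
            break
    # Leave the environment holding only the accepted flag (or none)
    conf.env.Replace(CXXFLAGS=[accepted] if accepted else [])
    conf.Finish()
    return accepted

std = find_cxx_std(env, ['-std=c++17', '-std=c++14'])
if std:
    print("Compiler accepts %s" % std)
    env.SConscript(dirs=['newcpp'])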

Related

How does "bitbake virtual/kernel" work if kernel recipes don't have PROVIDES variable set to virtual/kernel?

I'm trying to understand a few pieces associated with using bitbake to compile the Linux image and generate a boot image that would be used to flash onto the processor.
How does bitbake virtual/kernel actually work? I read through section 2.3, which says recipes use the PROVIDES variable to add an extra provider, meaning a recipe can be referred to in multiple ways (by its name, and by whatever PROVIDES is set to). But the kernel recipes (../poky/meta-bsp/recipes-kernel) I checked didn't have a PROVIDES variable, let alone one set to virtual/kernel.
Also, when running bitbake virtual/kernel, why is a boot.img generated when it should only be producing a Linux binary, i.e. vmlinux for instance?
In one of the kernel .inc files, I see:
DEPENDS += " mkbootimg-native openssl-native kern-tools-native"
...
FILESPATH =+ "${WORKSPACE}:"
SRC_URI = "file://kernel \
${#bb.utils.contains('DISTRO_FEATURES', 'systemd', 'file://systemd.cfg', '', d)} \
${#bb.utils.contains('DISTRO_FEATURES', 'virtualization', 'file://virtualization.cfg', '', d)} \
${#bb.utils.contains('DISTRO_FEATURES', 'nand-squashfs', 'file://squashfs.cfg', '', d)} \
mkbootimg-native, I reckon, refers to the boot-image recipe that the kernel recipe depends on; though shouldn't it be the other way around, since the boot image should contain the kernel image itself?
Lastly, is there a way to put debug prints in different recipe files to see whether they are being invoked? I tried echo... to no avail.
The recipes you checked probably do have PROVIDES. Most if not all kernel recipes inherit the kernel class (directly or via some other class, such as kernel-yocto). kernel.bbclass actually sets PROVIDES for you, cf. http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/kernel.bbclass#n8.
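For reference, kernel.bbclass contains a line like the following near the top (paraphrased; the exact form varies between Yocto releases):
PROVIDES += "virtual/kernel"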
boot.img does not seem to be created by default for any machine. After a quick glance at the code, it seems it is created by wic for images that inherit the image-live bbclass or add live to IMAGE_FSTYPES, cf. http://docs.yoctoproject.org/ref-manual/classes.html#image-live-bbclass.
From a simple git grep in the poky repo, it seems only bootimg-efi.py actually does something with a boot.img; it is called by wic when the -b or --bootimg-dir argument is passed, which wic enforces. So the boot.img artifact is probably created only for EFI machines and images.
If you use echo, printf, or similar shell functions (or print in Python tasks) in your tasks, their output appears only in ${WORKDIR}/temp/log.do_<task> for your recipe. Otherwise, you can use bbplain, bbnote, bbdebug, bbwarn, bberror, or bbfatal. These print to both the logs and the console (depending on your log level, which is configurable with -D; the more Ds, the higher the log level). A sketch of both styles follows below.
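For example, in a recipe (the recipe and task names here are hypothetical; bbwarn and bb.warn are the real BitBake logging helpers):
# Shell task: bbwarn writes to both log.do_compile and the console
do_compile_append() {
    bbwarn "my-recipe: do_compile was invoked"
}

# Python task: bb.warn does the same from Python code
python do_check() {
    bb.warn("my-recipe: do_check was invoked")
}
addtask check after do_configure before do_compile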

How to check if a specific program (shell command) is available on a Linux in D?

I am trying to write a script-like D program that behaves differently based on the availability of certain tools on the user's system.
I'd like to test whether a given program is available from the command line (in this case unison-gtk) or whether it is installed (I care only about Ubuntu systems, which use apt).
For the record, there is a workaround using e.g. tryRun:
bool checkIfUnisonGTK()
{
    import scriptlike;
    return tryRun("unison-gtk -version") == 0;
}
Instead of tryRun, I propose you grab the PATH environment variable, parse it (it is trivial to parse it), and look for specific executable inside those directories:
module which1;

import std.process;   // environment
import std.algorithm; // splitter
import std.file;      // exists
import std.stdio;

/**
 * Use this function to find out whether a given executable exists or not.
 * It behaves like the `which` command in the Linux shell.
 * If the executable is found, it returns the absolute path to it;
 * otherwise it returns an empty string.
 */
string which(string executableName) {
    auto path = environment["PATH"];
    auto dirs = splitter(path, ":");
    foreach (dir; dirs) {
        auto tmpPath = dir ~ "/" ~ executableName;
        if (exists(tmpPath)) {
            return tmpPath;
        }
    }
    return "";
} // which() function

int main(string[] args) {
    writeln(which("wget"));         // output: /usr/bin/wget
    writeln(which("non-existent")); // output: (empty line)
    return 0;
}
A natural improvement to the which() function is to check whether tmpPath is actually executable, and to return only when it has found an executable with the given name; a possible sketch follows below.
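A hedged sketch of that improvement, using POSIX access(2) to test the executable bit (the helper name isExecutable is mine, not from the answer):
import core.sys.posix.unistd : access, X_OK;
import std.string : toStringz;

// True if the current user may execute the file at the given path
bool isExecutable(string path)
{
    return access(path.toStringz, X_OK) == 0;
}

// Inside which(), the test then becomes:
//     if (exists(tmpPath) && isExecutable(tmpPath)) return tmpPath;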
There can't be any «native D solution» because you are trying to detect something in the system environment, not inside your program itself. So no solution will be «native».
By the way, if you are really concerned about Ubuntu only, you can parse the output of the command dpkg --status unison-gtk. For me, though, it prints that package 'unison-gtk' is not installed and no information is available (I suppose I don't have some repo enabled that you have). So I think that C1sc0's answer is the most universal one: you should try to run which unison-gtk (or whatever command you want to run) and check whether it prints anything. This will work even if the user has installed unison-gtk from somewhere other than a repository, e.g. has built it from source or copied a binary directly into /usr/bin.
Linux command to list all available commands and aliases
In short: run auto r = std.process.executeShell("compgen -c"). Each line in r.output is an available command. Requires bash to be installed.
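Wrapped up in D, that might look like the sketch below. Note that compgen is a bash builtin, so this invokes bash explicitly rather than relying on executeShell's default /bin/sh; the function name commandAvailable is hypothetical:
import std.algorithm.searching : canFind;
import std.process : execute;
import std.string : lineSplitter;

bool commandAvailable(string name)
{
    // compgen -c prints every command bash can resolve, one per line
    auto r = execute(["bash", "-c", "compgen -c"]);
    return r.status == 0 && r.output.lineSplitter.canFind(name);
}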
man which
man whereis
man find
man locate

Conditional inclusion of patch file in recipe script

I have recipe file and my SRC_URI section looks something as follows:
SRC_URI += "file://file1.patch \
file://file2.patch \
file://file4.patch \
"
I want to include file5.patch in SRC_URI only if a certain environment variable is set. Is there a way to insert an if condition within SRC_URI, something like this:
SRC_URI += "file://file1.patch \
file://file2.patch \
file://file4.patch \
if $ENVIRONMENT_VARIABLE:
    file://file5.patch
"
Is there any other way I can achieve the same thing?
Well, the short answer is: yes, you can do this, but it's messy and there's probably a Better Way(TM). So let's answer the question first. If you really want to change the behavior of a recipe via an environment variable, the first challenge is to set the variable and then let bitbake know that your new environment variable is safe and allowable. When you source the oe-init-build-env script to set up your project (or subsequently to set up a new shell to continue working on it), it sets an environment variable called BB_ENV_EXTRAWHITE. You must include your new variable in this list, like this:
$ export MYENV_VAR=file://file5.patch
$ export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE MYENV_VAR"
Once this is done, then bitbake won't scrub the environment of your new environment variable.
In your recipe, use a python snippet to conditionally add your patch as follows:
SRC_URI += "${#os.getenv('MYENV_VAR', '')}"
As you can see, it's a bit messy. Of course, you could get a little more complex and test the value of the variable in your recipe instead of putting the name of the patch file in your environment variable, but this example was the simplest way to demonstrate the concept; a variant along those lines is sketched below.
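For example, a hypothetical variant that treats MYENV_VAR as a flag (after whitelisting it as above and running export MYENV_VAR=1):
SRC_URI += "${@'file://file5.patch' if os.getenv('MYENV_VAR', '') == '1' else ''}"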
Perhaps a better way is to use an override, and not rely on environment variables. If you are building a bsp with multiple variants, you could use your bsp name as the override, something like this.
SRC_URI_append_mybsp = "file://file5.patch"
This is a much cleaner way to accomplish the same thing. Of course, I'm speculating about your use case. The Yocto Project reference manual explains overrides. One more suggestion: join #yocto or the Yocto Project mailing list, and you will have access to many smart people who can help you.
Hope this helps. ;)
The proper way to accomplish this would be as follows,
1. local.conf
# comment the following line to remove file5.patch
ENV_VAR = "1"
NOTE: Don't forget the double quotes, otherwise Yocto will throw an error.
2. recipe.bbappend
SRC_URI += "${#bb.utils.contains('ENV_VAR', '1', 'file://file5.patch', '', d)}"
Instead of local.conf, you're free to use any .conf file. This approach is taken from the Yocto mailing list.

Determine compiler name/version from gdb

I share my .gdbinit script (via NFS) across machines running different versions of gcc. I would like some gdb commands to be executed if the code I am debugging has been compiled with a specific compiler version. Can gdb do that?
I came up with this:
define hook-run
python
from subprocess import Popen, PIPE
from re import search

# grab the executable filename from gdb
# this is probably not general enough --
# there might be several objfiles around
objfilename = gdb.objfiles()[0].filename

# run readelf to dump the .comment section, which records the compiler
process = Popen(['readelf', '-p', '.comment', objfilename], stdout=PIPE)
output = process.communicate()[0].decode()

# match the version number with a regex
regex = r'GCC: \(GNU\) ([\d.]+)'
match = search(regex, output)
if match:
    compiler_version = match.group(1)
    gdb.execute('set $compiler_version="' + str(compiler_version) + '"')
gdb.execute('init-if-undefined $compiler_version="None"')

# do what you want with the python compiler_version variable and/or
# with the $compiler_version convenience variable
# I use it to load version-specific pretty-printers
end
end
It is good enough for my purpose, although it is probably not general enough.
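For reference, the .comment dump that the regex matches against typically looks like the following; the version string shown is just an example, and vendor-patched compilers may not use the "GCC: (GNU)" prefix at all, so the regex may need adjusting:
$ readelf -p .comment ./a.out

String dump of section '.comment':
  [     0]  GCC: (GNU) 7.3.0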

How to install a directory recursively with waf

I currently use the following valadoc build task to generate API documentation for my vala application:
doc = bld.new_task_gen(
    features = 'valadoc',
    output_dir = '../doc/html',
    package_name = bld.env['PACKAGE_NAME'],
    package_version = bld.env['VERSION'],
    packages = 'gtk+-3.0 gee-1.0 libxml-2.0 x11 gdk-x11-3.0 libpeas-gtk-1.0 libpeas-1.0 config xtst gdk-3.0',
    vapi_dirs = '../vapi',
    force = True)

path = bld.path.find_dir('../src')
doc.files = path.ant_glob(incl='**/*.vala')
This task creates a directory html in the output directory, including several subdirectories with HTML files and pictures.
What I am now trying to do is install these files to /usr/share/doc/projectname/html/. To do so I added the following to the wscript_build file (following the documentation I found here):
output_dir = doc.bld.path.find_or_declare('../doc/html')
doc.outputs = output_dir.ant_glob (incl='**/*')
doc.bld.install_files('${PREFIX}/share/doc/projectname/html', doc.outputs)
However, this leads to the error "Missing node signature". Does anyone know how to get around this error? Or is there a simple way to install a directory recursively with waf?
You can find a full-fledged sample here.
I had a similar issue with generated files and had to update the signature for the corresponding Node objects. Try creating a task:
from waflib import Utils

def signature_task(task):
    for x in task.generator.bld.path.find_dir('../doc/html').ant_glob('**/*', remove=False):
        x.sig = Utils.h_file(x.abspath())
At the top of your build rule, try adding:
from waflib import Build

# Support running task groups serially
bld.post_mode = Build.POST_LAZY
Then at the end of your build, add:
#Previous tasks belong to a group
bld.add_group()
#This task runs last
bld(rule=signature_task, always=True, name="signature_task")
There is an easier way using relative_trick.
bld.install_files(destination,
                  bld.path.ant_glob('../doc/html/**'),
                  cwd=bld.path.find_dir('../doc/html'),
                  relative_trick=True)
This gets a list of files from the glob, chops off the prefix, and puts them into the destination.
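Applied to the question's layout, a usage sketch would look like this (the destination path is taken from the question):
bld.install_files('${PREFIX}/share/doc/projectname/html',
                  bld.path.ant_glob('../doc/html/**'),
                  cwd=bld.path.find_dir('../doc/html'),
                  relative_trick=True)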
