I'm trying to create a custom recipe to sign a u-boot image generated from u-boot.bb.
I have 2 custom recipes:
1. u-boot.bb - clones, compiles, and deploys u-boot, producing u-boot.elf.
2. u-boot-sign.bb - depends on u-boot.bb. Takes u-boot.elf, passes it through a custom signing procedure, and deploys the result.
For signing I am forced to use a custom procedure, which comes in the form of Python scripts located in an external repository.
The part that causes the problem is accessing the deployed u-boot.elf binary from the u-boot.bb recipe: I cannot find a way to expose the u-boot.elf binary to the u-boot-sign.bb recipe.
What is the correct way to expose an image binary from one recipe so that it can be accessed, signed, and deployed by another recipe?
To share files between recipes within the cross-compilation scope, I used ${datadir} when installing the binaries (via do_install); ${datadir} is one of the directories staged into the sysroot by default. This allowed me to access all files described by FILES:${PN} via the recipe-sysroot.
u-boot.bb - exporting recipe:
…
do_install() {
    install -d ${D}${datadir}/u-boot-2016/
    install -m 0644 ${B}/${UBOOT_ELF_BINARY} ${D}${datadir}/u-boot-2016
}
FILES:${PN} = "${datadir}/u-boot-2016"
…
u-boot-sign.bb - depending recipe
…
DEPENDS += " u-boot python3-native"
do_sign() {
    ${STAGING_BINDIR_NATIVE}/python3-native/python3 sign.py \
        -i ${RECIPE_SYSROOT}${datadir}/u-boot-2016/${UBOOT_ELF_BINARY}
}
…
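If it is not hiding in the elided part of the recipe, do_sign also has to be registered as a task before it will run; a minimal sketch (the exact ordering is an assumption, not shown in the original):

addtask sign after do_compile before do_install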
Inspiration from Here
Edit
I was advised not to transfer files like this using ${datadir}. For intermediate files, the better approach is to use /firmware, as is done in meta-arm.
u-boot.bb - exporting recipe:
…
do_install() {
    install -D -p -m 0644 ${B}/${UBOOT_ELF_BINARY} ${D}/firmware/${UBOOT_ELF_BINARY}
}
FILES:${PN} = "/firmware"
SYSROOT_DIRS += "/firmware"
…
u-boot-sign.bb - depending recipe
…
DEPENDS += " u-boot python3-native"
do_sign() {
    ${STAGING_BINDIR_NATIVE}/python3-native/python3 sign.py \
        -i ${RECIPE_SYSROOT}/firmware/${UBOOT_ELF_BINARY}
}
…
It also requires adding /firmware to SYSROOT_DIRS (as shown above), since that directory is not staged into the sysroot by default.
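To actually deploy the signed result, a sketch using the standard deploy class could be appended to u-boot-sign.bb; the .signed output name is an assumption about what sign.py produces, so adjust it to whatever your script writes:

inherit deploy

do_deploy() {
    # publish the signed binary to DEPLOYDIR so images can consume it
    install -D -p -m 0644 ${B}/${UBOOT_ELF_BINARY}.signed \
        ${DEPLOYDIR}/${UBOOT_ELF_BINARY}.signed
}
addtask deploy after do_sign before do_build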
Please let me know what I need to do to extract linux-4.4.236.tar.xz from the rpm
My goal is to extract the kernel source and repackage it for use in our build process. We use the standard pattern for this, but something funny is happening with some elrepo packages (kernel-lt-4.4.236-1.el7.elrepo.nosrc.rpm specifically).
List the contents of the package
rpm -qlp kernel-lt-4.4.236-1.el7.elrepo.nosrc.rpm
warning: kernel-lt-4.4.236-1.el7.elrepo.nosrc.rpm: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY
config-4.4.236-x86_64
cpupower.config
cpupower.service
kernel-lt-4.4.spec
linux-4.4.236.tar.xz
List the contents of a cpio archive
We see linux-4.4.236.tar.xz. So we'll use the rpm2cpio method and check the contents of the cpio archive, but we have a problem: the table lacks linux-4.4.236.tar.xz.
rpm2cpio kernel-lt-4.4.236-1.el7.elrepo.nosrc.rpm |cpio -t
config-4.4.236-x86_64
cpupower.config
cpupower.service
kernel-lt-4.4.spec
Extract contents from the archive
When we extract, we see all the items from the table and not linux-4.4.236.tar.xz
rpm2cpio kernel-lt-4.4.236-1.el7.elrepo.nosrc.rpm |cpio -idv
config-4.4.236-x86_64
cpupower.config
cpupower.service
kernel-lt-4.4.spec
514 blocks
This is a NO source (nosrc) RPM, and according to this blog post that class of package deliberately does not contain the source archive; the spec file merely references it.
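Since the spec file is present, one way forward is to read its Source lines and fetch the tarball separately; a sketch (assuming the rpm-build and rpmdevtools packages are available for rpmspec and spectool):

grep -i '^Source' kernel-lt-4.4.spec                 # shows where the tarball lives
rpmspec -P kernel-lt-4.4.spec | grep -i '^Source'    # same, with macros expanded
spectool -g kernel-lt-4.4.spec                       # or download everything the spec names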
I need to use some gcloud commands in order to create a Redis instance on GCP as terraform does not support some options that I need.
I'm trying this:
terraform {
  # Before apply, run script.
  before_hook "create_redis_script" {
    commands = ["apply"]
    execute = [
      "REDIS_REGION=${local.module_vars.redis_region}",
      "REDIS_PROJECT=${local.module_vars.redis_project}",
      "REDIS_VPC=${local.module_vars.redis_vpc}",
      "REDIS_PREFIX_LENGHT=${local.module_vars.redis_prefix_lenght}",
      "REDIS_RESERVED_RANGE_NAME=${local.module_vars.redis_reserved_range_name}",
      "REDIS_RANGE_DESCRIPTION=${local.module_vars.redis_range_description}",
      "REDIS_NAME=${local.module_vars.redis_name}",
      "REDIS_SIZE=${local.module_vars.redis_size}",
      "REDIS_ZONE=${local.module_vars.redis_zone}",
      "REDIS_ALT_ZONE=${local.module_vars.redis_alt_zone}",
      "REDIS_VERSION=${local.module_vars.redis_version}",
      "bash", "../../../scripts/create-redis-instance.sh"
    ]
  }
}
The script is like this:
echo "[+]Creating IP Allocation Automatically using <$REDIS_VPC-network\/$REDIS_PREFIX_LENGHT>"
gcloud compute addresses create $REDIS_RESERVED_RANGE_NAME \
--global \
--purpose=VPC_PEERING \
--prefix-lenght=$REDIS_PREFIX_LENGHT \
--description=$REDIS_RANGE_DESCRIPTION \
--network=$REDIS_VPC
The error I get is:
terragrunt apply
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
ERRO[0002] Hit multiple errors:
Hit multiple errors:
exec: "REDIS_REGION=us-east1": executable file not found in $PATH
ERRO[0002] Unable to determine underlying exit code, so Terragrunt will exit with error code 1
I encountered the same issue and resigned myself to passing the values as parameters instead of environment variables. The execute list is run directly as a command (no shell), so its first element must be an executable; a VAR=value word is shell syntax, which is why exec cannot find "REDIS_REGION=us-east1" in $PATH.
It involves modifying the script and is a far less clear declaration, but it works :|
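For example, the script can read positional parameters; a sketch where the argument order, the reduced variable set, and the hook shown in the comment are illustrative, not from the original:

#!/bin/bash
# create-redis-instance.sh, reworked to take parameters instead of env vars.
# The terragrunt hook then calls it as:
#   execute = ["bash", "../../../scripts/create-redis-instance.sh",
#              local.module_vars.redis_reserved_range_name, ...]
REDIS_RESERVED_RANGE_NAME="$1"
REDIS_PREFIX_LENGHT="$2"
REDIS_RANGE_DESCRIPTION="$3"
REDIS_VPC="$4"

gcloud compute addresses create "$REDIS_RESERVED_RANGE_NAME" \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length="$REDIS_PREFIX_LENGHT" \
    --description="$REDIS_RANGE_DESCRIPTION" \
    --network="$REDIS_VPC"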
I am trying to add a validation step to a GitLab repo holding a single Ansible role (with no playbook).
The structure of the role looks like:
.gitlab-ci.yml
tasks/
templates/
files/
vars/
handlers/
With the .gitlab-ci.yml looking like:
stages:
  - lint

job-lint:
  image:
    name: cytopia/ansible-lint:latest
    entrypoint: ["/bin/sh", "-c"]
  stage: lint
  script:
    - ansible-lint --version
    - ansible-lint . -x 106 tasks/*.yml
I need to skip the naming rule, thus ignoring rule 106.
Otherwise, I would like all files at the repo root to be checked. Since there is no playbook, lint has to be given the files that need to be checked... or at least, that is what I understood; I may have this point wrong. But anyway, if I give no name, lint does return OK but actually performs no check.
My problem is that I don't know how to tell it to check all the YAML files recursively, or even within a subdirectory. The above code returns an error:
ansible-lint: error: unrecognized arguments: tasks/deploy.yml tasks/localhost.yml tasks/main.yml tasks/managedata.yml tasks/psqlconf.yml
Any idea on how to check all the files from a subdirectory or through the whole role?
PS: I am using the cytopia image for ansible-lint, but I have no problem using another, provided it's hosted on Docker Hub.
You should certainly be able to pass multiple YAML files as arguments to ansible-lint. I have version 4.1.1a0, and I'm able to use it like this, for example:
ansible-lint -x 106 roles/*/tasks/*.yml
I notice that you seem to have placed a . before your -x 106; that looks like an error. It doesn't look like ansible-lint will accept a directory name as an argument (it doesn't cause it to fail; it just doesn't accomplish anything).
I've tried this both with a locally installed ansible-lint and using the cytopia/ansible-lint image, which appears to perform identically:
docker run --rm -v $PWD:/src -w /src cytopia/ansible-lint -x 106 roles/*/tasks/*.yml
If you want to check all the YAML files, you can use find and pipe the results to xargs, something like this:
find ./ -not -name ".gitlab-ci.yml" -name "*.yml" | xargs ansible-lint -x 106
However, ansible-lint -x 106 ./ should work. Are you sure that your role really has errors? I've tested it both on ansible-galaxy init generated roles (with meta and all that stuff) and on roles containing only a tasks directory, and it worked every time.
EDIT: I tried creating an error in an existing role, replacing "present" with "latest" in a package install task:
$ ansible-galaxy install geerlingguy.nfs
$ cd ~/.ansible/roles/geerlingguy.nfs
$ sed -i "s/present/latest/g" tasks/setup-RedHat.yml
$ ansible-lint ./
Examining tasks/main.yml of type tasks
Examining tasks/setup-Debian.yml of type tasks
Examining tasks/setup-RedHat.yml of type tasks
Examining handlers/main.yml of type handlers
Examining meta/main.yml of type meta
[403] Package installs should not use latest
tasks/setup-RedHat.yml:2
Task/Handler: Ensure NFS utilities are installed.
and it actually worked, so you may want to run with verbose output to see whether yours actually works; maybe the rules applied to individual YAML files differ from those applied to whole roles.
When I ran my find-based check I got a lot of extra "[204] Lines should be no longer than 160 chars" warnings.
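If you want to silence those as well, the skip list takes several rules; recent 4.x versions accept a comma-separated list (otherwise repeat -x):

find ./ -not -name ".gitlab-ci.yml" -name "*.yml" | xargs ansible-lint -x 106,204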
I'm attempting to create an image with
bitbake core-image-minimal
for my Jetson Nano (NVIDIA Tegra). I've added the meta layer for Tegra devices from https://github.com/madisongh/meta-tegra
and added it to bblayers.conf. I have also added the lines
IMAGE_CLASSES += "image_types_tegra"
IMAGE_FSTYPES = "tegraflash"
to the local.conf file to be able to flash it later.
When I attempt to run the bitbake command to create the image, I get the error message:
ERROR: No recipes available for:
/home/mci/yocto/jetson-nano/meta-tegra/recipes-graphics/vulkan/vulkan-loader_1.1.%.bbappend
/home/mci/yocto/jetson-nano/meta-tegra/recipes-graphics/vulkan/vulkan-tools_1.1.%.bbappend
/home/mci/yocto/jetson-nano/meta-tegra/recipes-graphics/wayland/weston_7.0.0.bbappend
But aren't the files it says there are no recipes for themselves recipes? Isn't "vulkan-loader_1.1.%.bbappend" a recipe?
How do I solve this problem? Is it that bitbake can't find the files, or is a bbappend not a recipe but something else?
Michael,
I don't have an answer for the vulkan pieces but I do have a few pointers since we seem to be going down a similar path with the nano.
Use the warrior branch of Yocto
You'll need to download the binary pieces of the NVIDIA SDK through the SDK Manager
Point to these SDK packages in your local.conf with the NVIDIA_DEVNET_MIRROR variable, e.g. "file:///home/nvidia/yocto/git/poky/devnet/nano-dev"
Because of the binary pieces in step 2, you need to use an older gcc version, which isn't really supported in warrior. I used the linaro-7.2 layer.
Since gcc7 is not supported in warrior, yocto / openembedded will attempt to pass flags to gcc which will make the build fail. Here's a summary, which I hope is complete, to help you through this.
Add DEBUG_PREFIX_MAP="" to local.conf, as collected in the snippet below, and apply the patch that follows it.
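Gathered in one place, the local.conf additions from the pointers above would look roughly like this (the mirror path is the example from the earlier pointer):

NVIDIA_DEVNET_MIRROR = "file:///home/nvidia/yocto/git/poky/devnet/nano-dev"
DEBUG_PREFIX_MAP = ""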
diff --git a/meta/recipes-core/busybox/busybox.inc b/meta/recipes-core/busybox/busybox.inc
index 174ce5a8c0..e8d651a010 100644
--- a/meta/recipes-core/busybox/busybox.inc
+++ b/meta/recipes-core/busybox/busybox.inc
@@ -128,7 +128,7 @@ do_prepare_config () {
${S}/.config.oe-tmp > ${S}/.config
fi
sed -i 's/CONFIG_IFUPDOWN_UDHCPC_CMD_OPTIONS="-R -n"/CONFIG_IFUPDOWN_UDHCPC_CMD_OPTIONS="-R -b"/' ${S}/.config
- sed -i 's|${DEBUG_PREFIX_MAP}||g' ${S}/.config
+ #sed -i 's|${DEBUG_PREFIX_MAP}||g' ${S}/.config
}
# returns all the elements from the src uri that are .cfg files
diff --git a/meta/recipes-core/libxcrypt/libxcrypt.bb b/meta/recipes-core/libxcrypt/libxcrypt.bb
index 3b9af6d739..350f7807a7 100644
--- a/meta/recipes-core/libxcrypt/libxcrypt.bb
+++ b/meta/recipes-core/libxcrypt/libxcrypt.bb
@@ -24,7 +24,7 @@ FILES_${PN} = "${libdir}/libcrypt*.so.* ${libdir}/libcrypt-*.so ${libdir}/libowc
S = "${WORKDIR}/git"
BUILD_CPPFLAGS = "-I${STAGING_INCDIR_NATIVE} -std=gnu99"
-TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir} -Wno-error=missing-attributes"
-CPPFLAGS_append_class-nativesdk = " -Wno-error=missing-attributes"
+TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir} "
+CPPFLAGS_append_class-nativesdk = " "
BBCLASSEXTEND = "nativesdk"
Best of luck! I apologize if this is a bit rough, but I'm just getting through this myself.
I deleted everything and started with a fresh build, did the EXACT same procedure and added all the same lines to local.conf and bblayers.conf... but this time the bitbake command runs with no errors at all.
Is it possible to create aliases when I enter a certain folder?
What I want:
I use composer a lot (a PHP package manager), which installs binaries in ./vendor/bin. I would like to run those binaries directly from the project root (.).
For example:
/path/to/project
| - composer.json // dictates dependencies for the project
| - vendor // libs folder for composer, is created by composer
| | - bin // if lib has bin, composer creates this folder
| | | phpunit // binary
| | | phinx // binary
| | - somelib1 // downloaded by composer
| | - somelib2 // downloaded by composer
Is it possible to get this to work:
> cd /path/to/project
> phpunit
And get phpunit to execute?
Something like "sensing" the composer.json file and dynamically find the binaries in ./vendor/bin and then do something like alias="./vendor/bin/<binary-name> $#" automatically?
I use OS X 10.9 and the bundled Terminal app.
You can override cd, use trap my_function DEBUG to run something on every command, or add a command to PS1 or PROMPT_COMMAND.
These have different behaviours and caveats, and I can't recommend any of them for this use case (after having used each of them at some point). They are bad solutions to X-Y problems.
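For illustration only, with the caveats above fully in force, the cd-override variant might look like this sketch; note that PATH grows on every matching cd:

cd() {
    builtin cd "$@" || return    # builtin avoids calling ourselves recursively
    # if this looks like a composer project, put its vendor/bin on PATH
    if [ -f composer.json ] && [ -d vendor/bin ]; then
        PATH="$PWD/vendor/bin:$PATH"
    fi
}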
An alternative which is much less likely to break things horribly is to create a custom function to do both things:
cdp() {
    # cd to the given directory, then run the project-local phpunit;
    # "$@" forwards the arguments to cd ("$#" would be the argument count)
    cd "$@" && ./vendor/bin/phpunit
}
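Usage then looks like:
> cdp /path/to/project
which changes into the project and immediately runs its vendored phpunit.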