I tried to build the BSP for v3msk (a Linux-based embedded system) on Ubuntu 18.04, following this link:
https://elinux.org/R-Car/Boards/Yocto-Gen3-ADAS#Building_the_BSP_for_Renesas_ADAS_boards
I used Yocto v3.21.0
The local.conf I used is available here https://pastebin.com/UyBGzQ2J
I tried adding x11 to distro features.
DISTRO_FEATURES_append = " x11"
I ran
bitbake core-image-x11
and I expected it to build a Yocto image with X11 support.
Instead, I got this error:
ERROR: Nothing PROVIDES 'core-image-x11'
core-image-x11 was skipped: missing required distro feature 'x11' (not in
DISTRO_FEATURES)
What could be missing in local.conf?
Nothing PROVIDES 'core-image-x11'
means that no layer listed in your build/conf/bblayers.conf file provides an image recipe with this name.
Try executing:
find source | grep images | grep x11
to see whether any layer contains image recipes related to x11. Add the discovered layer to your build/conf/bblayers.conf file (see the sketch below), then retry your command:
bitbake core-image-x11
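For reference, a minimal sketch of what the bblayers.conf addition could look like (the layer path here is a hypothetical placeholder; use the path that the find command above reveals):

BBLAYERS += " \
  /path/to/source/meta-some-x11-layer \
"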
So I have a Yocto build, and I need to install a 3rd-party RPM. I've created a recipe for it using the link to the source RPM. The problem is implementing the do_install() function.
The package is normally installed via rpm --install rpm_package, and then I need to enable its service.
For the service I know I have to inherit systemd in my recipe file, but for the installation I'm still confused.
The install command only creates directories and copies files over.
Any help is appreciated.
Indeed, do_install only creates install directories under your tmp/work/cortex[...]/my_recipe_name/ directory.
Normally, if you bitbake an image that includes your recipe, you should see a warning like:
Files/directories were installed but not shipped in any package
So, after the installation, you need to "package" those files in order for them to end up in your final Linux image. To do so, use FILES_${PN}, for instance:
FILES_${PN}_append = " /usr/sbin/"
where /usr/sbin is the directory containing what you want to "install" in your image.
Then the package will be shipped in your final image.
For the service, inherit systemd is indeed mandatory, and you have to add this to do_install:
install -Dm0644 ${WORKDIR}/my_script.service ${D}${systemd_system_unitdir}/my_script.service
and add this at the end of your .bb file:
SYSTEMD_SERVICE_${PN} = "my_script.service"
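Putting it all together, a minimal sketch of such a recipe (the file names and LICENSE value are hypothetical placeholders, extracting content from a source RPM is omitted, and this uses the pre-Honister underscore override syntax, as above):

SUMMARY = "Example recipe shipping a binary and its systemd service"
LICENSE = "CLOSED"

SRC_URI = "file://my_binary \
           file://my_script.service"

inherit systemd

do_install() {
    # Install the binary into the image staging area (${D})
    install -Dm0755 ${WORKDIR}/my_binary ${D}${sbindir}/my_binary
    # Install the unit file where systemd expects it
    install -Dm0644 ${WORKDIR}/my_script.service ${D}${systemd_system_unitdir}/my_script.service
}

FILES_${PN}_append = " ${sbindir}"
SYSTEMD_SERVICE_${PN} = "my_script.service"
# Enable the service at boot (this is the systemd class default)
SYSTEMD_AUTO_ENABLE_${PN} = "enable"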
You can install packages at runtime with rpm, but it is not recommended for production builds: the whole idea behind Yocto is creating a custom distribution with everything you need already integrated.
In order to use the rpm command at runtime, you need to configure it to fetch packages from the right location. And the right location is generally a Yocto build, because its output is compatible with your board's architecture and system (otherwise, you need to handle that compatibility manually).
You can link the Yocto build's rpm files to the board's rpm command using smart (dnf on newer releases); for more info and examples, check the "Using Runtime Package Management" section of the official Yocto documentation.
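A minimal local.conf sketch of such a setup (the feed URI is a hypothetical server exposing the build's tmp/deploy/rpm directory over HTTP):

# Build an image that ships the package-management tooling
EXTRA_IMAGE_FEATURES += "package-management"
PACKAGE_CLASSES = "package_rpm"
# Where the board fetches packages from at runtime
PACKAGE_FEED_URIS = "http://192.168.7.1:8000"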
IMPORTANT
I do not recommend creating a systemd/sysvinit service to install an rpm/deb/ipk/tar package.
The idea is: if you can install a package with rpm, that means it is already supported by Yocto and has a recipe. So, simply add:
IMAGE_INSTALL_append = " package"
I have a question regarding including different tools in a Yocto image recipe. Currently I am building an image recipe for my Avenger96 board. I have created a base image and it runs fine on the device. But when I try to run sgdisk after booting, it says -sh: sgdisk: command not found. I understand that these commands are not available by default and need to be installed.
But I am not sure how to do that, given my board is not connected to the internet. Can I include these commands/tools in the image recipe? I want to use some other commands like ufw, etc., but I have the same issue with them too.
Can someone please let me know how to do this?
Your help will be much appreciated.
Thanks in advance.
P.S.: I am using Ubuntu 20.04 with Yocto as the build system.
sgdisk is provided by the recipe meta/recipes-devtools/fdisk/gptfdisk_xx.bb
The xx version depends on your Poky release; for dunfell, see the recipe in the dunfell branch of the poky repository.
ufw is present in meta-openembedded/meta-networking/recipes-connectivity/ufw
So, make sure meta-openembedded/meta-networking is present in your bblayers.conf, and to include both of them, add the following line to local.conf or to your custom image recipe:
IMAGE_INSTALL_append = " gptfdisk ufw"
If you still do not find sgdisk, try gptfdisk-sgdisk.
If you want to add any recipe in the future, try looking for it in the official Yocto git repositories (git.yoctoproject.org) or in the OpenEmbedded Layer Index (layers.openembedded.org).
It is not recommended to add tools manually onto the board, unless you are in the development process and need to save time. Here are some ideas for that case:
- Create an image for development that includes all dev features (gcc, g++, cmake, etc.)
- Include git and other fetching tools
- Clone the tool's source code and compile it on the board
- Or: bitbake the recipe with Yocto and copy the output binary directly to the board via ssh or other means (see the sketch below)
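For that last option, a rough sketch of the workflow (the staging path under tmp/work is an assumption about the default layout; <arch>, <version>, and <board-ip> are placeholders):

bitbake gptfdisk
# the built binary is staged under the recipe's work directory
find tmp/work -path '*gptfdisk*/image/usr/sbin/sgdisk'
# copy it directly onto the running board
scp tmp/work/<arch>/gptfdisk/<version>/image/usr/sbin/sgdisk root@<board-ip>:/usr/sbin/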
Question: What are the steps to install a kubectl plugin on Windows?
I have written a standalone plugin binary that I would like to invoke from within kubectl (following the instructions in https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/).
The documentation for installation states to perform the following steps:
"A plugin is nothing more than a standalone executable file, whose name begins with kubectl-. To install a plugin, simply move this executable file to anywhere on your PATH."
This works fine on Mac and Linux, but performing those instructions on Windows does not seem to work. Running "kubectl plugin list" does not list my plugin, and I cannot invoke it from within kubectl. I even tried adding my binary to the .kube directory autogenerated by kubectl, and it does not detect the plugin.
Several discussions on GitHub reference this issue without explaining how to install a kubectl plugin on Windows (e.g. https://github.com/kubernetes/kubernetes/issues/73289), and after a lengthy Google/Stack Overflow search, there don't seem to be any tutorials/solutions that I (or my teammates) could locate. Any help would be much appreciated! Thank you.
In my case, I don't have an issue with installing a plugin on a Windows 10 machine (by simply including it on my PATH). Here is the output of kubectl plugin list:
c:\opt\bin>kubectl plugin list
The following kubectl-compatible plugins are available:
c:\opt\bin\kubectl-getbuildver.bat
- warning: c:\opt\bin\kubectl-getbuildver.bat identified as a kubectl plugin, but it is not executable
c:\opt\bin\kubectl-hello.exe
c:\opt\bin\kubectl-helloworld.p6
- warning: c:\opt\bin\kubectl-helloworld.p6 identified as a kubectl plugin, but it is not executable
error: 2 plugin warnings were found
Instead, I'm encountering a known GitHub issue: a 'not supported by windows' error while invoking my plugin with kubectl (v1.13.4):
c:\opt\bin>kubectl hello
not supported by windows
c:\opt\bin>kubectl-hello.exe
Tuesday
*kubectl-hello.exe is a console application written in C#. I also tried using a Windows batch file and a Perl 6 program as plugins, but none of these worked out on Windows.
I think only .exe files are considered executables by kubectl when it searches for plugins on the $PATH in a Windows environment.
I tested this by creating a simple HelloWorld app as a single-file executable and adding it to my system's $PATH, and it got picked up and executed correctly.
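A minimal sketch of that test from a Windows command prompt (kubectl-hello.exe stands for any compiled single-file plugin executable; c:\opt\bin is assumed to already be on PATH):

REM rename/copy the binary so its name starts with kubectl-
copy hello.exe c:\opt\bin\kubectl-hello.exe
REM verify kubectl discovers it, then invoke it as a subcommand
kubectl plugin list
kubectl hello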
kubectl krew is like brew for managing kubectl plugins. You can try it; it supports Windows.
https://github.com/kubernetes-sigs/krew
I want to use wkhtmltopdf in my PHP application.
Therefore I added wkhtmltopdf to my apt.yml file and hoped that everything would work...
...unfortunately, it doesn't.
Every time I run wkhtmltopdf google.ch output.pdf I get the following error:
wkhtmltopdf: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory
Does anybody know how to set up wkhtmltopdf correctly in the PHP buildpack of Cloud Foundry?
Two possibilities:
You are missing shared library dependencies. You'll need to add those to apt.yml so they get installed as well (see the sketch after the next point). It looks like libgl1-mesa-dev might be what you're missing; there could be others, though. If you run ldd wkhtmltopdf, you can see a list of all the dependencies and what's missing.
The dependencies are installed, but they're not found when you try to run wkhtmltopdf. If you're running cf ssh to go into an app container so you can run wkhtmltopdf, this might be the issue. Try running cf ssh "<app-name>" -t -c "/tmp/lifecycle/launcher /home/vcap/app bash ''" instead. Otherwise, you need to manually source the .profile.d/* scripts; buildpacks set env variables in these scripts, and they often indicate where shared libraries can be loaded from.
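For the first possibility, a minimal apt.yml sketch (assuming the top-level packages list used by the Cloud Foundry apt integration, and that libgl1-mesa-dev is indeed the missing package; ldd may reveal more to add):

---
packages:
  - wkhtmltopdf
  - libgl1-mesa-dev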
Hope that helps!
I'm trying to build a Haskell Stack project whose extra-deps includes opencv, which itself depends on OpenCV 3.0 (presently only buildable from source).
I'm following the Docker integration guidelines, and using my own image which builds upon fpco/stack-build:lts-9.20 and installs OpenCV 3.0 (Dockerfile.stack.opencv).
If I build my image I can confirm that opencv is installed and visible to pkg-config:
$ docker build -t stack-opencv -f Dockerfile.stack.opencv .
$ docker run stack-opencv pkg-config --modversion opencv
3.3.1
However, if I specify this image in my stack.yaml:
docker:
  image: stack-opencv
Attempting to stack build yields:
Configuring opencv-0.0.2.0...
setup: The pkg-config package 'opencv' version >=3.0.0 is required but it
could not be found.
I've run the build without the Docker integration, and it completes successfully.
The Dockerfile was passing CMAKE_INSTALL_PREFIX=$HOME/usr to cmake.
When running docker build, the root user is used, and thus $HOME is set to /root.
However, when doing stack build, the stack user is used; it does not have permission to read /root, and thus pkg-config cannot find opencv.
By removing the -D CMAKE_INSTALL_PREFIX=$HOME/usr flag from the cmake invocation, the default prefix (/usr/local) is used instead. This is also accessible to the stack user, and thus pkg-config can find it during a stack build.
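A minimal sketch of the relevant Dockerfile change (the surrounding OpenCV build steps are abbreviated; the real file is the Dockerfile.stack.opencv linked above):

# Before: installed under /root/usr, invisible to the stack user
#   RUN cmake -D CMAKE_INSTALL_PREFIX=$HOME/usr .. && make install
# After: installs under the default /usr/local, readable by everyone
RUN cmake .. && make -j"$(nproc)" && make install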