Is there a command or some tools that can help you get the corresponding CONFIG_XXX options to enable a module. For example, if I want to enable module nvme, which CONFIG_XXX should be y or m?
I know there are documents that may state the config options for nvme, but I want to know if there is a command or tool that can give you the CONFIG_XXX just by typing a command.
Thanks.
if I want to enable module nvme, which CONFIG_XXX should be y or m?
As far as I know, there is no documentation or single-purpose command that would report the specific configuration symbol that builds a module.
However, the Makefile that actually specifies how the module in question is built is the authoritative source for this information.
Typically the relevant Makefile is in the same subdirectory (or a parent directory) as the module's source.
If you're not sure where the source module resides, then you could search all the Makefile files in the kernel source for the conditional build of the .o object file:
$ find . -name Makefile | xargs grep nvme.o
./drivers/nvme/host/Makefile:obj-$(CONFIG_BLK_DEV_NVME) += nvme.o
... <irrelevant search results>
$
So the answer would be CONFIG_BLK_DEV_NVME.
Note that the subdirectory that has the relevant Makefile will also have the Kconfig file that describes the configuration symbol you just identified.
Rather than manually edit the .config file, use the make menuconfig command.
Using menuconfig ensures that your configuration meets all dependencies and triggers all auto-selections properly.
You can use the search feature (simply type the slash character, /, and the config name) to retrieve help text to guide you to the location of the configuration prompt.
The help text and status of CONFIG_BLK_DEV_NVME could look like:
Symbol: BLK_DEV_NVME [=n]
Type : tristate
Prompt: NVM Express block device
Location:
-> Device Drivers
(1) -> NVME Support
Defined at drivers/nvme/host/Kconfig:4
Depends on: PCI [=n] && BLOCK [=y]
Selects: NVME_CORE [=n]
Selected by [n]:
- NVM [=n] && BLOCK [=y] && PCI [=n]
The current state of each configuration entry mentioned is displayed in square brackets after an equals sign, e.g. [=n].
The Depends on: line indicates that both CONFIG_PCI and CONFIG_BLOCK have to be enabled in order for the CONFIG_BLK_DEV_NVME prompt to be even visible.
You may have to use the search capability to convert these other CONFIG_xxx names to their menu prompts and locations.
The Selects: line indicates the other configuration entries that will be automatically enabled if this config item is selected.
The Selected by [x]: line(s) indicate the other configuration entries that can automatically enable this config item. In this case the logical expression indicates that when the three other config entries listed are all enabled, this config is enabled automatically as well.
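For reference, the Kconfig entry behind this search output looks roughly like the following (paraphrased from drivers/nvme/host/Kconfig based on the search output above; the exact wording varies between kernel versions):
config BLK_DEV_NVME
    tristate "NVM Express block device"
    depends on PCI && BLOCK
    select NVME_CORE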
You can search the options in the interactive kernel configuration menu, but you have to build the menu first via make menuconfig; then type / followed by the term you're looking for. Each Symbol: in the search results is followed by the option name without the CONFIG_ prefix. The results also show the location of each option in the menu tree.
Some of the options are tristate: y - the feature is going to be built into the kernel image, m - the feature should reside in a loadable module, n - the feature is disabled.
You need to add CONFIG_BLK_DEV_NVME=m (either edit .config or use make menuconfig) to enable support of nvmeNnM block devices as a loadable module.
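If you prefer the command line to the menus, the kernel tree also ships a small helper, scripts/config, that edits .config for you. A minimal sketch, run from the top of the kernel source tree (the grep output shown is illustrative):
$ scripts/config --module BLK_DEV_NVME   # writes CONFIG_BLK_DEV_NVME=m into .config
$ make olddefconfig                      # re-resolves dependencies; the symbol is dropped again if PCI or BLOCK is disabled
$ grep BLK_DEV_NVME .config
CONFIG_BLK_DEV_NVME=m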
Related
Using menuconfig for NuttX development.
I was trying to do the following for a custom board setup:
if ARCH_BOARD_FOO
source "configs/FOO/Kconfig"
endif
The problem is that I would like to have some permission control on the FOO directory, so that not all users can see it.
However, the Kconfig language seems to always parse the sourced file regardless of whether the if condition is true. This causes make menuconfig to fail for users who do not have permission to read the FOO directory.
Does anyone know a solution for this?
Try using a custom board configuration. Then your board directory can lie inside or outside of the NuttX source tree; in either case, it will not be visible to the configuration system. You would configure it like this:
CONFIG_ARCH_BOARD_CUSTOM=y
CONFIG_ARCH_BOARD_CUSTOM_NAME="myboard"
CONFIG_ARCH_BOARD_CUSTOM_DIR="/home/users/me/myboard"
... and other options ...
In the above example, the board directory lies outside of the NuttX source tree and is given as an absolute path. The board configuration could also lie inside of the NuttX source tree, and the path may be specified relative to the top-level directory with:
CONFIG_ARCH_BOARD_CUSTOM_DIR_RELPATH=y
For example:
CONFIG_ARCH_BOARD_CUSTOM_DIR="configs/FOO"
CONFIG_ARCH_BOARD_CUSTOM_DIR_RELPATH=y
Now if CONFIG_ARCH_BOARD_CUSTOM=y is not defined, there is no way to access /home/users/me/myboard or configs/FOO from the configuration system.
This works because, with CONFIG_ARCH_BOARD_CUSTOM=y, the custom board's Kconfig file is linked in as configs/dummy/Kconfig; otherwise configs/dummy/Kconfig is linked to an empty configuration to satisfy the configuration system.
In order to ignore the error, you could use:
if ARCH_BOARD_FOO
-source "configs/FOO/Kconfig"
endif
Introduction
I have a do_install task in a BitBake recipe which I've written for a driver where I execute a custom install script. The task fails because the installation script cannot find kernel source header files within <the image rootfs>/usr/src/kernel. This script runs fine on the generated OS.
What's Happening
Here's the relevant part of my recipe:
SRC_URI += "file://${TOPDIR}/example"
DEPENDS += " virtual/kernel linux-libc-headers "
do_install () {
( cd ${TOPDIR}/example/Install ; ./install )
}
Here's a relevant portion of the install script:
if [ ! -d "/usr/src/kernel/include" ]; then
echo ERROR: Linux kernel source include directory not found.
exit 1
fi
cd /usr/src/kernel
make scripts
...
./install_drv pci ${DRV_ARGS}
I tried changing the test to if [ ! -d "/usr/src/kernel" ], which also failed. The install script passes different options to install_drv; a relevant portion of it is below:
cd ${DRV_PATH}/pci
make NO_SYSFS=${ARG_NO_SYSFS} NO_INSTALL=${ARG_NO_INSTALL} ${ARGS_HWINT}
if [ ${ARG_NO_INSTALL} == 0 ]; then
if [ `/sbin/lsmod | grep -ci "uceipci"` -eq 1 ]; then
./unload_pci
fi
./load_pci DEBUG=${ARG_DEBUG}
fi
The build: target of the Makefile within ${DRV_PATH}/pci is essentially this:
make -C /usr/src/kernel SUBDIRS=${PWD} modules
My Research
I found these comments within linux-libc-headers.inc relevant:
# You're probably looking here thinking you need to create some new copy
# of linux-libc-headers since you have your own custom kernel. To put
# this simply, you DO NOT.
#
# Why? These headers are used to build the libc. If you customise the
# headers you are customising the libc and the libc becomes machine
# specific. Most people do not add custom libc extensions to the kernel
# and have a machine specific libc.
#
# But you have some kernel headers you need for some driver? That is fine
# but get them from STAGING_KERNEL_DIR where the kernel installs itself.
# This will make the package using them machine specific but this is much
# better than having a machine specific C library. This does mean your
# recipe needs a DEPENDS += "virtual/kernel" but again, that is fine and
# makes total sense.
#
# There can also be a case where your kernel extremely old and you want
# an older libc ABI for that old kernel. The headers installed by this
# recipe should still be a standard mainline kernel, not your own custom
# one.
I'm a bit unclear on whether I can 'get' the headers from STAGING_KERNEL_DIR properly, since I'm not using make.
Within kernel.bbclass provided in the meta/classes directory, there is this variable assignment:
# Define where the kernel headers are installed on the target as well as where
# they are staged.
KERNEL_SRC_PATH = "/usr/src/kernel"
This path is then packaged later within that .bbclass file here:
PACKAGES = "kernel kernel-base kernel-vmlinux kernel-image kernel-dev kernel-modules"
...
FILES_kernel-dev = "/boot/System.map* /boot/Module.symvers* /boot/config* ${KERNEL_SRC_PATH} /lib/modules/${KERNEL_VERSION}/build"
Update (1/21):
A suggestion on the yocto IRC channel was to use the following line:
do_configure[depends] += "virtual/kernel:do_shared_workdir"
which is corroborated by the Yocto Project Reference Manual, which states that in version 1.8, there was the following change:
The kernel build process was changed to place the source in a common shared work area and to place build artifacts separately in the source code tree. In theory, migration paths have been provided for most common usages in kernel recipes but this might not work in all cases. In particular, users need to ensure that ${S} (source files) and ${B} (build artifacts) are used correctly in functions such as do_configure and do_install. For kernel recipes that do not inherit from kernel-yocto or include linux-yocto.inc, you might wish to refer to the linux.inc file in the meta-oe layer for the kinds of changes you need to make. For reference, here is the commit where the linux.inc file in meta-oe was updated.
Recipes that rely on the kernel source code and do not inherit the module classes might need to add explicit dependencies on the do_shared_workdir kernel task, for example:
do_configure[depends] += "virtual/kernel:do_shared_workdir"
But I'm having difficulties applying this to my recipe. From what I understand, I should be able to change the above line to:
do_install[depends] += "virtual/kernel:do_shared_workdir"
This would mean that the do_install task must now run after the do_shared_workdir task of the virtual/kernel recipe, so I should be able to work with the shared workdir (see Question 3 below), but I still have the same missing kernel header issue.
My Questions
I'm using a custom Linux kernel (v3.14) from git.kernel.org, which inherits the kernel class. Here are some of my questions:
Shouldn't the package kernel-dev be a part of any recipe which inherits the kernel class? (this section of the variables glossary)
If I add the virtual/kernel to the DEPENDS variable, wouldn't that mean that the kernel-dev would be brought in?
If kernel-dev is part of the dependencies of my recipe, wouldn't I be able to point to the /usr/src/kernel directory from my recipe? According to this reply on the Yocto mailing list, I think I should.
How can I properly reference the kernel source header files, preferably without changing the installation script?
Consider your Environment
Remember that there are several different environments within the build-time environment, consisting of:
sysroots
in the case of kernels, a shared work directory
target packages
kernel-dev is a target package, which you'd install into the rootfs of the target system for certain things like kernel symbol maps which are needed by profiling tools like perf/oprofile. It is not present at build time although some of its contents are available in the sysroots or shared workdir.
Point to the Correct Directories
Your do_install runs at build time, so this happens within the directory structures of the build system, not the target one. In particular, /usr/src/ won't be correct; it would need to be some path within your build directory. The virtual/kernel do_shared_workdir task populates ${STAGING_KERNEL_DIR}, so you would want to change to that directory in your script.
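As a rough sketch of what changing to that directory in the script could look like, assuming you are willing to touch the install script and to have the recipe export a KERNEL_SRC-style variable (KERNEL_SRC here is a placeholder of my choosing, not something the build system exports for you):
# in the install script: fall back to the old hard-coded path when the
# variable is not set (KERNEL_SRC is a placeholder the recipe must export)
KERNEL_DIR="${KERNEL_SRC:-/usr/src/kernel}"
if [ ! -d "${KERNEL_DIR}/include" ]; then
    echo "ERROR: Linux kernel source include directory not found."
    exit 1
fi
cd "${KERNEL_DIR}"
make scripts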
Adding a Task Dependency
The:
do_install[depends] += "virtual/kernel:do_shared_workdir"
dependency looks correct for your use case, assuming nothing in do_configure or do_compile accesses the data there.
Reconsider the module BitBake class
The other answers are correct in the recommendation to look at module.bbclass, since this illustrates how common kernel modules can be built. If you want to use custom functions or make commands, this is fine, you can just override them. If you really don't want to use that class, I would suggest taking inspiration from it though.
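For comparison, a minimal module recipe built on module.bbclass looks roughly like the hello-mod example shipped in meta-skeleton (file names and license details below are illustrative):
SUMMARY = "Example out-of-tree kernel module"
LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://COPYING;md5=<md5 of your license file>"

inherit module

SRC_URI = "file://Makefile \
           file://hello.c \
           file://COPYING \
          "

S = "${WORKDIR}"

# module.bbclass pulls in the virtual/kernel dependency (including the shared
# workdir) and passes KERNEL_SRC, KERNEL_PATH and O to make, as noted below.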
Task Dependencies
Adding virtual/kernel to DEPENDS means that virtual/kernel:do_populate_sysroot must run before your recipe's do_configure task. Since you need a dependency on the do_shared_workdir task here, a DEPENDS on virtual/kernel is not enough.
Answer to Question 3
The kernel-dev package would be built, however it would then need to be installed into your target image and used at runtime on a real target. You need this at build time so kernel-dev is not appropriate.
Other Suggestions
You'd likely want the kernel-devsrc package for what you're doing, not the kernel-dev package.
I don't think anyone can properly answer that last question here. You are using a non-standard install method: we can't know how to interact with it...
That said, take a look at what meta/classes/module.bbclass does. It sets several related variables for make: KERNEL_SRC=${STAGING_KERNEL_DIR}, KERNEL_PATH=${STAGING_KERNEL_DIR}, O=${STAGING_KERNEL_BUILDDIR}. Maybe your installer supports some of these environment variables and you could set them in your recipe?
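A hedged sketch of that idea, assuming your install script actually honors a KERNEL_SRC-style variable (that is an assumption about the vendor installer, not something Yocto guarantees):
do_install[depends] += "virtual/kernel:do_shared_workdir"

do_install () {
    cd ${TOPDIR}/example/Install
    # STAGING_KERNEL_DIR is the shared kernel source populated by
    # virtual/kernel:do_shared_workdir; exporting it as KERNEL_SRC/KERNEL_PATH
    # only helps if the installer reads those variables (an assumption)
    KERNEL_SRC=${STAGING_KERNEL_DIR} KERNEL_PATH=${STAGING_KERNEL_DIR} ./install
}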
I seem to build a kernel image successfully, but I cannot generate all the modules I expect. I expect more modules, since I see them enabled in the gconfig window. Here is a copy of my make session. It seems that make goes into the device directories, but I cannot figure out why it does not create the .ko files I expect to see. I have checked the Makefile in the drivers/ directory, and I can see that it is configured with a number of lines like
obj-$(CONFIG_PCI) += pci/
which directs make to descend into the pci/ directory when CONFIG_PCI is set, for instance. I think this implies that I should see a number of .ko files, but I do not; I have seen just one .ko file, for the SCSI module. I would like to be able to build all of the modules I selected.
I also verified that a number of modules are enabled when I issued:
make VARIANT_DEFCONFIG=msm8974_sec_hlte_spr_defconfig msm8974_sec_defconfig SELINUX_DEFCONFIG=selinux_defconfig gconfig
But as I said, I do not see any of them. What am I missing please?
@Subin - Thanks. I just tried make modules_install. I should mention that I am cross-compiling this for an ARM target. I believe modules_install is for installing the drivers on the machine you are currently on? I got a message about needing to be root, and I did not proceed. I have been wondering when I need to run it. What does it do, exactly?
Re: make modules; I have run it before. I'll run it again and post the result. Since I got one .ko file, I figured the issue is something different between that one module and every other one enabled in my config. Here is what I got when I ran make modules:
sansari#ubuntu:~/WORKING_DIRECTORY$ make modules
CHK include/linux/version.h
CHK include/generated/utsrelease.h
make[1]: `include/generated/mach-types.h' is up to date.
CALL scripts/checksyscalls.sh
Building modules, stage 2.
MODPOST 1 modules
Re: your comment on the location of the .ko files, I am running find to see if perhaps I am not looking in the right place; it only finds the one that was built, not the others. Here is the output:
sansari#ubuntu:~/WORKING_DIRECTORY$ find . -type f -name "*.ko"
./drivers/scsi/scsi_wait_scan.ko
sansari#ubuntu:~/WORKING_DIRECTORY$
Should I perhaps run make V=1, that is, in verbose mode? Would that provide more information on why the other modules are not built?
@Gil Hamilton - Thanks. You are right. Here is an excerpt of the .config file:
#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
# CONFIG_CHR_DEV_ST is not set
# CONFIG_CHR_DEV_OSST is not set
# CONFIG_BLK_DEV_SR is not set
CONFIG_CHR_DEV_SG=y
CONFIG_CHR_DEV_SCH=y
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_WAIT_SCAN=m
This entry is the only one set to 'm'.
Most device driver modules in the linux kernel build system use a tristate (3-valued) configuration setting. The options are
'n' (don't build at all),
'y' (build and link statically into the main kernel object), and
'm' (build as module for dynamic loading).
The values are determined by the content of .config. The values in .config are usually generated from an existing config file (look in arch/<ARCH>/configs for your <ARCH>). Also check the output of 'make help' for interesting configuration targets.
If you're not seeing the .ko files being created, that indicates the corresponding configuration variable is set to either 'y' or 'n'.
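To see this in the excerpt above, compare the one entry that produced a .ko file with one that did not (run from the top of the kernel tree; output shown for illustration):
$ grep -E 'CONFIG_BLK_DEV_SD=|CONFIG_SCSI_WAIT_SCAN=' .config
CONFIG_BLK_DEV_SD=y
CONFIG_SCSI_WAIT_SCAN=m
$ # '=y' drivers are linked into the kernel image, so they produce no .ko files;
$ # only the single '=m' entry yields a module, which is why scsi_wait_scan.ko
$ # is the only one that find locates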
I am using 3.10.x kernel tree. My kernel module needs config VIDEOBUF2.
That is defined in drivers/media/v4l2-core/Kconfig:
# Used by drivers that need Videobuf2 modules
config VIDEOBUF2_CORE
select DMA_SHARED_BUFFER
tristate
So I put CONFIG_VIDEOBUF2_CORE=y in my kernel config file and compiled. From the Kconfig, CONFIG_VIDEOBUF2_CORE has no dependencies, so I think adding CONFIG_VIDEOBUF2_CORE=y to my kernel config should work. I am modifying the right kernel config file, since I set other flags like CONFIG_VIDEO_DEV=y and those take effect.
The generated .config does not contain CONFIG_VIDEOBUF2_CORE=y, and the compilation fails with a bunch of:
undefined reference to `vb2_buffer_done'
undefined reference to `vb2_buffer_done'
undefined reference to `vb2_buffer_done'
undefined reference to `vb2_buffer_done'
I would really appreciate it if someone could help me with this.
Thank you.
I can't comment directly on the question, as that requires 50 reputation. You can do: make ARCH=<target_architecture> CROSS_COMPILE=<toolchain_prefix> <defconfig_file>. Running this creates a .config file in the top-level directory of your kernel source. This file contains the default configuration for the peripherals on your target SoC (I assume you have some knowledge of defconfig files). Now, if you wish to modify it and add support for your device, run make menuconfig and select the configuration option you need, say VIDEOBUF2_CORE in your case; then your kernel source is ready to be compiled/cross-compiled. PS: Avoid editing the .config file manually.
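A minimal sketch of that sequence for an ARM target, assuming an arm-linux-gnueabi- toolchain prefix and using multi_v7_defconfig purely as an example defconfig name (substitute your own):
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- multi_v7_defconfig   # writes .config at the top of the source tree
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- menuconfig           # then enable the options you need interactively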
I've encountered a scenario where I'm building a Perl module as part of another Build system on a Windows machine. I use the --install_base option of Module::Build to specify a temporary directory to put module files until the overall build system can use them. Unfortunately, that other Build system has a problem if any of its files that it depends on are read only - it tries to delete any generated files before rebuilding them, and it can't clean any read-only files (it tries to delete it, and it's read only, which gives an error.) By default, Module::Build installs its libraries with the read-only bit enabled.
One option would be to make a new step in the build process that removes the read-only bit from the installed files, but due to the nature of the build tool that will require a second temporary directory...ugh.
Is it possible to configure a Module::Build based installer to NOT enable that read-only bit when the files are installed to the --install_base directory? If so, how?
No, it's not a configurable option. It's done in the copy_if_modified method in Module::Build::Base:
# mode is read-only + (executable if source is executable)
my $mode = oct(444) | ( $self->is_executable($file) ? oct(111) : 0 );
chmod( $mode, $to_path );
If you controlled the Build.PL, you could subclass Module::Build and override copy_if_modified to call the base class and then chmod the file writable. But I get the impression you're just trying to install someone else's module.
Probably the easiest thing to do would be to install a copy of Module::Build in a private directory, then edit it to use oct(666) (or whatever mode you want). Then invoke perl -I /path/to/customized/Module/Build Build.PL. Or, (as you said) just use the standard Module::Build and add a separate step to mark everything writable afterwards.
Update: ysth is right; it's ExtUtils::Install that actually does the final copy. copy_if_modified is for populating blib. But ExtUtils::Install also hardcodes the mode to read-only. You could use a customized version of ExtUtils::Install, but it's probably easier to just have a separate step.
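If you go the separate-step route, one hedged example on Windows, assuming the --install_base directory is C:\build\perl-stage (a hypothetical path), is to clear the read-only attribute recursively after installation:
> rem /S recurses into subfolders; /D also clears the attribute on folders
> attrib -R C:\build\perl-stage\* /S /D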