Autoconf: check struct member type - linux-kernel

I am new to autoconf, so I would like to ask how I can check whether a struct member is declared with a particular type.
For example, I need to check whether struct posix_acl.a_refcount is declared as refcount_t rather than atomic_t.
I know of Autoconf functions such as ac_fn_c_check_decl and ac_fn_c_check_member, but none that accomplish this task.
Thank you!

Disclaimer: As there are no other answers at the time this answer is being written, this represents my best attempt to provide a solution, but you may need to adjust things to make it work for you. Caveat emptor.
You would need to use the AC_COMPILE_IFELSE macro with code that uses atomic_t, and if the compilation succeeds, then you're using atomic_t. As future-proofing, you might also add a test for refcount_t if the atomic_t test fails.
Example:
# _POSIX_ACL_REFCOUNT_T(type-to-check)
# ------------------------------------
# Checks whether the Linux kernel's `struct posix_acl'
# type uses `type-to-check' for its `a_refcount' member.
# Sets the shell variable `posix_acl_refcount_type' to
# `type-to-check' if that type is used, else the shell
# variable remains unset.
AC_DEFUN([_POSIX_ACL_REFCOUNT_T],
[AC_REQUIRE([AC_PROG_CC])
AC_MSG_CHECKING([whether struct posix_acl uses $1 for refcounts])
AC_COMPILE_IFELSE(
  [AC_LANG_PROGRAM(
    [[#include <uapi/../linux/posix_acl.h>]],
    [[struct posix_acl acl;
      $1 v = acl.a_refcount;
      (void) v;]])],
  [AC_MSG_RESULT([yes])
   AS_VAR_SET([posix_acl_refcount_type], [$1])],
  [AC_MSG_RESULT([no])])
])
_POSIX_ACL_REFCOUNT_T([atomic_t])
# If posix_acl_refcount_type isn't set, see if it works with refcount_t.
AS_VAR_SET_IF([posix_acl_refcount_type],
[],
[_POSIX_ACL_REFCOUNT_T([refcount_t])]
)
dnl
dnl Add future AS_VAR_SET_IF tests as shown above for the refcount type
dnl before the AS_VAR_SET_IF below, if necessary.
dnl
AS_VAR_SET_IF([posix_acl_refcount_type],
[],
[AC_MSG_FAILURE([struct posix_acl uses an unrecognized type for refcounts])]
)
AC_DEFINE_UNQUOTED([POSIX_ACL_REFCOUNT_T], [$posix_acl_refcount_type],
[The type used for the a_refcount member of the Linux kernel's posix_acl struct.])
The tests assume that you already have a variable containing the kernel source directory and that the kernel source's include directory has been added to CPPFLAGS or CFLAGS before the tests run (a hypothetical sketch of that wiring appears after the next paragraph). You can add more tests at the indicated position; if the posix_acl_refcount_type shell variable is still unset after all of them, the final AS_VAR_SET_IF invocation calls AC_MSG_FAILURE to stop configure with the specified error message.
Note that I used <uapi/../linux/posix_acl.h> to target the kernel's own linux/posix_acl.h header rather than the userspace API header (uapi/linux/posix_acl.h, which is installed in a system's include directory with the uapi/ prefix stripped off); picking up the userspace header would make the compile tests above fail, because the userspace API does not define struct posix_acl. This may not work the way I expect and may need modification.
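For completeness, here is a hedged sketch of how that wiring might look in configure.ac, placed before the _POSIX_ACL_REFCOUNT_T checks above; the --with-kernel-dir option and the KERNELDIR variable are invented for illustration, and real kernel headers typically need more include paths and defines than this:
# Hypothetical: let the user point configure at a kernel source tree.
AC_ARG_WITH([kernel-dir],
  [AS_HELP_STRING([--with-kernel-dir=DIR], [path to the Linux kernel source tree])],
  [KERNELDIR=$withval],
  [AC_MSG_ERROR([--with-kernel-dir=DIR is required])])
# Make the kernel's headers visible to the AC_COMPILE_IFELSE tests.
CPPFLAGS="$CPPFLAGS -I$KERNELDIR/include"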

Related

Where should I start to debug when Make throws a particular error

My knowledge of Make is small. I have been told that everything you put after make (that does not contain "-") is a target.
Well, a build process I have is failing.
First there is a line
make path/to/configuration_file
configuration_file is not a target. It is an autogenerated configuration file buried inside the directory structure ("path/to") that has the form
#
# Boot Configuration
#
#
# DRAM Component
#
CONFIG_DRAM_TYPE_LPDDR4=y
# CONFIG_DRAM_TYPE_DDR4 is not set
CONFIG_DDR_SIZE=0x80000000
#
# Boot Device
#
# CONFIG_ENABLE_EMMC_BOOT is not set
# CONFIG_ENABLE_NAND_BOOT is not set
CONFIG_ENABLE_SPINAND_BOOT=y
# CONFIG_ENABLE_SPINOR_BOOT is not set
CONFIG_EMMC_ACCESS_8BIT=y
# CONFIG_EMMC_ACCESS_4BIT is not set
# CONFIG_EMMC_ACCESS_1BIT is not set
so I cannot understand how this is a target. For reference, when I run make there is a Makefile but this Makefile does not reference this file.
Still, this line completes fine.
The step where it fails says
make diags
and I have verified there is no "diags" target.
I will paste here the error output, which may give us more info about what is happening:
GEN cortex_a/output/Makefile
Init diag test "orc_scheduler" ...
remoteconfig: Failed to generate configure in cortex_a/soc/visio/tests/orc_scheduler!
Makefile:11 recipe for target 'orc_scheduler-init' failed
make[10]: *** [orc_scheduler-init] Error 25
At least what I would like to know is how to interpret this error message. I don't know what the "11", the "10", or the "25" refer to.
make is fundamentally a tool for automatically running commands in the right order so you don't have to type them in yourself. So all the commands make runs are commands that you could just type into your shell prompt. And all the errors that those commands generate are the same ones that you would see if you typed the command yourself. So, looking at make to try to understand those errors is looking in the wrong place: you have to look at the documentation for whatever command was invoked.
A "target" is just a file that make knows how to build. The fact that when you typed make <somefile> is didn't give you an error that it doesn't know how to build <somefile>, means that <somefile> is a target as far as your makefiles are concerned.
The error message Makefile:11: simply refers to line 11 of the file named Makefile, which is where the failed command that make ran can be found. Likewise, make[10] means the failure happened in a make that was ten levels deep in recursive make invocations, and Error 25 is the exit status returned by the command that failed. But none of this will likely help you solve the problem of why the command failed (unless the problem is that you invoked it with the wrong arguments and you need to adjust the makefile to specify different arguments).
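To make the format concrete, here is a made-up Makefile (not the asker's; the target name is reused only for familiarity, and recipe lines must start with a tab):
# hypothetical Makefile; assume the recipe line below happens to be line 11
orc_scheduler-init:
	exit 25
# Running this from a make that is already ten levels deep in recursion prints roughly:
#   Makefile:11: recipe for target 'orc_scheduler-init' failed
#   make[10]: *** [orc_scheduler-init] Error 25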
The command that failed generated the message:
remoteconfig: Failed to generate configure in cortex_a/soc/visio/tests/orc_scheduler!
I don't know what that means, but it's not related to make. You'll need to find out what this remoteconfig command is, what it does, and why it failed. It's unfortunate that it doesn't show any better error message as to why it failed to "generate configure", but again there's nothing make can do about that.
If you want to learn more about make you can look at the GNU make manual (note, GNU make is only one implementation of make; there are others and they are fundamentally the same but different in details).

How does "bitbake virtual/kernel" work if kernel recipes don't have PROVIDES variable set to virtual/kernel?

I'm trying to understand a few pieces associated with using bitbake to compile the linux image and generating a boot image that would be used to flash onto the processor.
How come bitbake virtual/kernel works at all? I read through section 2.3, and it says recipes use the PROVIDES variable to add an extra provider, meaning a recipe can be referred to in multiple ways (by its name, and by whatever PROVIDES is set to). But the kernel recipes (../poky/meta-bsp/recipes-kernel) I checked didn't have a PROVIDES variable, let alone one set to virtual/kernel.
Upon running bitbake virtual/kernel, how come a boot.img is generated when it should only be producing a Linux binary, i.e. vmlinux for instance?
In one of the kernel .inc files, I see:
DEPENDS += " mkbootimg-native openssl-native kern-tools-native"
...
FILESPATH =+ "${WORKSPACE}:"
SRC_URI = "file://kernel \
${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'file://systemd.cfg', '', d)} \
${@bb.utils.contains('DISTRO_FEATURES', 'virtualization', 'file://virtualization.cfg', '', d)} \
${@bb.utils.contains('DISTRO_FEATURES', 'nand-squashfs', 'file://squashfs.cfg', '', d)} \
mkbootimg-native I reckon refers to the boot image recipe that the kernel recipe depends on, though shouldn't it be the other way around since the boot image should contain the kernel image itself?
Lastly, is there a way to put debug prints in different recipe files to see whether they are being invoked? I tried echo... to no avail.
The recipes you checked probably do have PROVIDES. Most if not all kernel recipes inherit the kernel class (directly or via some other class, such as kernel-yocto), and kernel.bbclass sets PROVIDES for you; cf. http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/kernel.bbclass#n8.
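For reference, the line in question looks like this in older Poky releases (newer releases make it conditional on KERNEL_PACKAGE_NAME, but the effect for an ordinary kernel recipe is the same):
PROVIDES += "virtual/kernel"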
boot.img does not seem to be created by default for any machine. After a quick glance at the code, it seems that it is created by wic for images inheriting the image-live bbclass or adding live to IMAGE_FSTYPES; cf. http://docs.yoctoproject.org/ref-manual/classes.html#image-live-bbclass.
From a simple git grep in the poky git repo, it seems only bootimg-efi.py actually does something with a boot.img; it is called by wic when the -b or --bootimg-dir argument is passed, which wic enforces. So the boot.img artifact is probably created only for EFI machines and images.
If you use echo or printf or similar shell functions (or print in Python tasks) in your tasks, their output is only visible in ${WORKDIR}/temp/log.do_<task> of your recipe. Alternatively, you can use bbplain, bbnote, bbdebug, bbwarn, bberror or bbfatal; these print to both the logs and the console (subject to your log level, which is configurable with -D: the more Ds, the higher the log level).
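As a concrete, hypothetical example of such a debug print, a .bbappend could add a log line to one of the recipe's shell tasks; the file and task names here are invented:
# my-kernel_%.bbappend (hypothetical)
do_compile_append() {
    bbwarn "reached do_compile for ${PN}"
}
The message then shows up both on the console and in ${WORKDIR}/temp/log.do_compile. Newer BitBake releases spell the override as do_compile:append().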

How to add compile option for ModelSim using VUnit?

Using ModelSim and VUnit, I am trying to compile some UVVM code, but this gives warnings like:
** Warning: C:\work\Qtec\SVN_sim\Design\uvvm\uvvm_util\src\methods_pkg.vhd(1159): (vcom-1346) Default expression of interface object is not globally static.
I would like to suppress these warnings, so I tried updating the VUnit run.py file with add_compile_option, based on the VUnit Python interface documentation:
uvvm_util = prj.add_library('uvvm_util')
uvvm_util.add_source_files(join(root, '../../uvvm/uvvm_util/src/*.vhd'))
uvvm_util.add_compile_option('modelsim.vcom_flags', ['-suppress 1346'])
But when compiling, I then get the error:
Compiling ....\uvvm\uvvm_util\src\types_pkg.vhd into uvvm_util ...
** Error (suppressible): (vcom-1902) Option "-suppress 1346" is either unknown, requires an argument, or was given with a bad argument.
You could edit the suppress entry in the modelsim.ini file. source
It could be a Python/Tcl issue with how spaces are handled. See this link.
So the space between -suppress and 1346 is not properly forwarded.
The VUnit ui.py shows
modelsim.vcom_flags
Extra arguments passed to ModelSim vcom command.
Must be a list of strings.
I cannot test it, but in this case the line should possibly be:
uvvm_util.add_compile_option('modelsim.vcom_flags', ['-suppress', '1346'])
Edit: after some reading... to me the difference between add_compile_option and set_compile_option is not clear. Maybe you could try the other? (See the sketch below.)
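I cannot verify this against a ModelSim installation, but going by the VUnit Python interface documentation, set_compile_option overwrites whatever value the option already has, while add_compile_option appends to it; both expect the flags as a list of separate strings:
# run.py excerpt (untested sketch); uvvm_util is the library object created above
uvvm_util.set_compile_option('modelsim.vcom_flags', ['-suppress', '1346'])  # replace existing flags
# or, to keep flags added elsewhere:
uvvm_util.add_compile_option('modelsim.vcom_flags', ['-suppress', '1346'])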

Find dead code in Golang monorepo

My team has all our Golang code in a monorepo.
Various package subdirectories with library code.
Binaries/services/tools under cmd
We've had it for a while and are doing some cleanup. Are there any tools or techniques that can find functions not used by the binaries under cmd?
I know go vet can find private functions that are unused in a package. However, I suspect we also have exported library functions that aren't used.
UPD 2020: The unused tool has been incorporated into staticcheck.
Unfortunately, v0.0.1-2020.1.4 will probably be the last to support this
feature. Dominik explains that it is because the check consumes a lot of
resources and is hard to get right.
To get that version:
env GO111MODULE=on go get honnef.co/go/tools/cmd/staticcheck@v0.0.1-2020.1.4
To use it:
$ staticcheck --unused.whole-program=true -- ./...
./internal/pkg/a.go:5:6: type A is unused (U1001)
Original answer below.
Dominik Honnef's unused tool might be what you're looking for:
Optionally via the -exported flag, unused can analyse all arguments as a
single program and report unused exported identifiers. This can be useful for
checking "internal" packages, or large software projects that do not export
an API to the public, but use exported methods between components.
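If memory serves, fetching and running the standalone tool looked roughly like this (hedged, since the tool is no longer maintained separately):
go get honnef.co/go/tools/cmd/unused
unused -exported ./...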
Try running go build -gcflags -live. This passes the -live flag to the compiler (go tool compile), instructing it to output debugging messages about liveness analysis. Unfortunately, it only prints when it's found live code, not dead code, but you could in theory look to see what doesn't show up in the output.
Here's an example from compiling the following program stored in dead.go:
package main

import "fmt"

func main() {
	if true {
		fmt.Println(true)
	} else {
		fmt.Println(false)
	}
}
Output of go build -gcflags -live:
# _/tmp/dead
./dead.go:7: live at call to convT2E: autotmp_5
./dead.go:7: live at call to Println: autotmp_5
If I'm reading this correctly, the second line states that the implicit call to convT2E (which converts non-interface types to interface types, since fmt.Println takes arguments of type interface{}) is live, and the third line states that the call to fmt.Println is live. Note that it doesn't say that the fmt.Println(false) call is live, so we can deduce that it must be dead.
I know that's not a perfect answer, but I hope it helps.
It is a bit dirty, but it works for me.
I had a lot of structs which I did not want to test manually, so I wrote a script that renames each struct, runs all the tests (ci/test.sh), and renames it back if any test fails:
#!/bin/sh
set -e
git grep 'struct {' | grep type | while read line; do
    file=$(echo $line | awk -F ':' '{print $1}')
    struct=$(echo $line | awk '{print $2}')
    sed "s/$struct struct/_$struct struct/g" -i $file
    echo "testing for struct $struct changed in file $file"
    if ! ./ci/test.sh; then
        sed "s/_$struct struct/$struct struct/g" -i $file
    fi
done
It's not an open source solution, but it works.
If you are using GoLand, you should consider its code-inspections feature, which includes these useful inspections:
Reports unused constant
Reports unused exported functions
Reports unused exported types in the main package and in tests
Reports unused unexported functions
Reports global variables that are defined but are never used in code
Reports unused function parameters
Reports unused types
(It looks like the implementation of this feature is black box, jetbrains does not open source this feature)
Go-related detection tools seem to place more emphasis on accuracy and try hard to minimize false reports, whereas using GoLand's code-inspections feature may require more judgment on your part. :)
Disclosure: I am just a paying user, I do not work for JetBrains; I simply think this feature works well.
A reliable but inelegant method I've used is to rename or comment out functions you suspect might not be used and then recompile everything -- no errors means you didn't need them.
If they are needed, it shows you where these functions are called so it's good for getting familiar with a code base and seeing how things connect.

Setting kernel tunable parameter

I want to introduce a new kernel module parameter, say new_param=1/0, and then have that parameter checked inside the kernel code like this:
if (new_param == 1)
        do some work...
else
        do other work...
That is how I want to use the new kernel module parameter. Can anyone please help me? What are the steps I need to follow to do this?
One way to use a custom kernel parameter is to add it to the kernel command line and parse it out of /proc/cmdline, i.e.:
Add the parameter to the kernel command line
BOOT_IMAGE=<image> root=<root> ro quiet splash vt.handoff=7 your_parameter=<value>
When you boot, you will be able to access this parameter by parsing the contents of /proc/cmdline:
$ cat /proc/cmdline
BOOT_IMAGE=<image> root=<root> ro quiet splash vt.handoff=7 your_parameter=<value>
I believe a solution more tailored to your needs would include using __setup(), which is mentioned (but not explained well) in /Documentation/kernel-parameters.txt.
There are quite a few examples in the kernel source. One such example is in /drivers/block/brd.c:
#ifndef MODULE
/* Legacy boot options - nonmodular */
static int __init ramdisk_size(char *str)
{
	rd_size = simple_strtol(str, NULL, 0);
	return 1;
}
__setup("ramdisk_size=", ramdisk_size);
#endif
Following this example, you could add your own __init function and __setup() registration in the relevant source file. For parsing integers from an option string in your __init function, see get_option() in /lib/cmdline.c; a sketch follows below.
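As a hedged sketch adapted from the brd.c example above, handling the new_param from the question could look like this when the code is built into the kernel (this path does not apply to loadable modules):
static int new_param;	/* the flag later tested with if (new_param == 1) */

#ifndef MODULE
static int __init new_param_setup(char *str)
{
	get_option(&str, &new_param);	/* parses "new_param=<integer>" from the boot command line */
	return 1;
}
__setup("new_param=", new_param_setup);
#endif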
Update
For modules, you should use module_param(), which takes three arguments: the variable name, the variable type, and the sysfs permissions. More on this macro can be found in include/linux/moduleparam.h.
In the module that you want to be able to pass parameters to, first declare the parameters as globals. An example would be the following in the module source:
int foo = 200;
module_param(foo, int, 0);
Recompile the module and you will see that you can load it via modprobe <module-name> foo=40.
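Putting it together, a minimal hypothetical module built around the new_param from the question might look like the following; the module name demo and the log messages are made up:
#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static int new_param;	/* 0 by default; override with e.g. modprobe demo new_param=1 */
module_param(new_param, int, 0644);	/* also visible under /sys/module/demo/parameters/new_param */
MODULE_PARM_DESC(new_param, "Selects which code path demo_init() takes");

static int __init demo_init(void)
{
	if (new_param == 1)
		pr_info("demo: doing some work\n");
	else
		pr_info("demo: doing the other work\n");
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");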
