Linux kernel hashtable struct hlist_head - linux-kernel

Can anybody tell me where the kernel hashtable structs hlist_head and hlist_node are defined in the Linux kernel? I searched on free-electrons.com but couldn't get hold of the definition.

You can find them in types.h
http://lxr.free-electrons.com/source/include/linux/types.h#L189
It's also in a few other places. Check Linux Cross Reference.
http://lxr.free-electrons.com/ident?i=hlist_head
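For reference, the definitions in include/linux/types.h are just two small structs (reproduced here for convenience; check the link above for the exact version in your kernel tree):

struct hlist_head {
    struct hlist_node *first;
};

struct hlist_node {
    struct hlist_node *next, **pprev;
};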

Related

How do compilers handle records and unions? [duplicate]

Possible Duplicate:
How does a compiled C++ class look like?
Hi all,
bash$ cat struct.c
struct test
{
    int i;
    float f;
};
bash$ gcc -c struct.c
The object file struct.o is in ELF format. I am trying to understand what this object file contains. The source code is just the definition of a struct. There is nothing executable here, so there should be nothing in the text section, and there is no data really either.
So where does the definition of struct go really?
I tried using:
readelf -a struct.o
objdump -s struct.o
but don't quite understand this.
Thanks,
Jagrati
So where does the definition of struct go really?
The struct definition usually goes to /dev/null. C does not have any introspection features, so the struct definition is not needed at run time. During compilation, accesses to struct fields are converted to numeric offsets, e.g. x->f would be compiled to the equivalent of *(float *)((char *)x + sizeof(int)). That's why you need to include the header every time you use the struct.
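To make that concrete, here is a minimal sketch (not from the thread): offsetof() exposes the same numbers that field accesses compile down to, and nothing else about struct test needs to survive into the object file.

#include <stdio.h>
#include <stddef.h>

struct test {
    int i;
    float f;
};

int main(void)
{
    /* These offsets are computed at compile time; typically 0 and 4. */
    printf("offset of i: %u\n", (unsigned)offsetof(struct test, i));
    printf("offset of f: %u\n", (unsigned)offsetof(struct test, f));
    printf("sizeof(struct test): %u\n", (unsigned)sizeof(struct test));
    return 0;
}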
There is nothing. It does not exist. You have created nothing and used nothing.
The definition of the struct is used at compile time. That definition would normally be placed in a non-compiled header file. It is when a struct is used that some code is generated. The definition affects what the compiler produces at that point.
This, among other reasons, is why compiling against one version of a library and then using another version at runtime can crash programs.
structs are not compiled, they are declared. Functions get compiled though.
I'm not an expert and I can't actually answer the question... But I thought of this.
Memory is memory: whether you use one byte as an integer or as a char, it is still one byte. The result depends only on the compiler.
So why can't the same be true for structs? The compiler will calculate the memory to allocate (since your machine probably allocates words of memory, not bytes, a 1-byte struct will likely be padded with 3 bytes so a 4-byte word can be allocated), and then the struct is just a "reference" for you when accessing data.
I think there is no need to actually HAVE something underneath: it's sufficient for the compiler to know, at compile time, that if you refer to the field "name" of your struct, it should treat it as an array of chars of length X.
As I said, I'm not an expert in such internals, but as I see it, there is no need for a struct to be converted into "real code"... It's just an annotation for the compiler, which can be discarded after compilation is done.

What To Do With cdev_init When Converting file_operations To proc_ops?

According to this question: Passing argument 4 of ‘proc_create’ from incompatible pointer type,
you have to use struct proc_ops instead of struct file_operations on newer kernels.
How should we handle initializing the cdev with cdev_init when it uses the old file_operations structs?
I have looked through some examples on newer linux kernels (https://elixir.bootlin.com/linux/latest/source/drivers/char/pcmcia/scr24x_cs.c#L216) but they still use the old file_operations.
Thanks,
-Special K
The signature of the cdev_init() function still takes a pointer to a struct file_operations object, so you don't need to change this call when adapting your module for newer kernels.
The signature of proc_create() changed because it works with files under the proc filesystem (procfs), and that filesystem was redesigned in newer kernels.
The cdev_init function creates (initializes) a character device. While a character device is usually represented by a file (which could also be located under procfs), this file is of a special type, and the operations on this file are not the same as the operations of the character device itself. Because cdev_init is only responsible for the character device (not a file), it is unaffected by the changes in the procfs design.
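To illustrate, a minimal sketch, assuming a 5.6+ kernel; the names my_*, "mydev" and "mydrv" are made-up placeholders and error handling is omitted. It shows cdev_init() keeping the classic struct file_operations while proc_create() takes the new struct proc_ops:

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/proc_fs.h>

static dev_t my_devt;
static struct cdev my_cdev;

static ssize_t my_dev_read(struct file *file, char __user *buf,
                           size_t count, loff_t *ppos)
{
    return 0;   /* a real driver would copy_to_user() here */
}

/* Character device side: unchanged, still struct file_operations. */
static const struct file_operations my_dev_fops = {
    .owner = THIS_MODULE,
    .read  = my_dev_read,
};

static ssize_t my_proc_read(struct file *file, char __user *buf,
                            size_t count, loff_t *ppos)
{
    return 0;
}

/* procfs side: this is what changed in 5.6 - struct proc_ops. */
static const struct proc_ops my_proc_fops = {
    .proc_read = my_proc_read,
};

static int __init my_init(void)
{
    /* Error handling omitted to keep the sketch short. */
    alloc_chrdev_region(&my_devt, 0, 1, "mydev");
    cdev_init(&my_cdev, &my_dev_fops);               /* old struct, still fine */
    cdev_add(&my_cdev, my_devt, 1);
    proc_create("mydrv", 0444, NULL, &my_proc_fops); /* new struct since 5.6 */
    return 0;
}

static void __exit my_exit(void)
{
    remove_proc_entry("mydrv", NULL);
    cdev_del(&my_cdev);
    unregister_chrdev_region(my_devt, 1);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");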

System calls with struct parameters (Linux)

How is it that certain System calls take pointers to structs as arguments? If these structs are defined in the kernel, then how can user programs create instances of them?
There is no magic here. The struct types used in syscalls and meant to be user-creatable are declared in header files, just as the syscalls themselves are. Take stat(2):
int stat(const char *path, struct stat *buf);
You get the declaration of struct stat (on Linux) by including sys/stat.h.
Some types are not meant to be directly declared by client code, however. In comments you mentioned semaphores, and sem_t is an example of such. The user header provides only an incomplete declaration, so you can't create an instance directly. This is intentional. In those cases there will be a call that creates an instance and returns a pointer to it, for example:
sem_t *sem_open(const char *name, int oflag);
You are expected to provide that same pointer as an argument to subsequent syscalls, even though you can't dereference it yourself (because its declaration is incomplete). The distinction between structs and struct pointers is extremely important here.
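As a quick illustration of the first case (a sketch, not from the answer), the user program defines a struct stat itself and passes its address to the kernel:

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat sb;     /* complete type, declared in sys/stat.h */

    /* The kernel fills in the caller-owned memory behind the pointer. */
    if (stat("/etc/passwd", &sb) == 0)
        printf("size: %lld bytes\n", (long long)sb.st_size);

    return 0;
}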
Every time you create a new structure inside the kernel, it can be exported to userspace by executing "make headers_install".
So if the user-space binary is built on the same machine, it will have an identical copy of the header files (typically in /usr/include). Hence system calls can take pointers to structures as parameters.

What is the correct jmp_buf size?

I got a library compiled with GCC for the ARM Cortex-M3 processor as a static lib. This library has a jmp_buf in its interface.
struct png_struct_def {
#ifdef PNG_SETJMP_SUPPORTED
    jmp_buf jmpbuf;
#endif
    png_error_ptr error_fn;
    // a lot more members ...
};
typedef struct png_struct_def png_struct;
When I pass a png_struct address to the library function, it stores the value not in error_fn but in the last field of jmpbuf. Obviously it has been compiled with different assumptions about the size of jmp_buf.
Both compiler versions are arm-none-eabi-gcc. Why is the code incompatible? And what is the "correct" jmp_buf size? I can see in the disassembly that only the first half of the jmp_buf is used. Why does the size change between GCC versions when it is too large anyway?
Edit:
The library is compiled with another library that I can't recompile because the source code is not available. This other library uses this interface. So I can't change the structure of the interface.
You may simply reorder the data declaration. I would suggest the following:
#include <setjmp.h>

/* "itf" replaces the original tag "if", which is a reserved C keyword. */
typedef struct itf {
    int some_value;
    union
    {
        jmp_buf jmpbuf;
        char pad[511];   /* over-allocated so any jmp_buf layout fits */
    } __attribute__ ((__transparent_union__));
} *ifp;
The issue is that, depending on the ARM library, different registers may be saved. At a maximum, 16 32-bit general-purpose registers and 32 64-bit NEON registers might be saved, which gives around 320 bytes. If your struct is not used in many places, you can over-allocate. This should work no matter which definition of jmp_buf you get.
If you cannot recompile the library, you may try:
typedef struct itf {
    char pad[LIB_JMPBUF_SZ];   /* LIB_JMPBUF_SZ: the jmp_buf size the library assumes */
    int some_value;
} *ifp;
where you have calculated the jmp_buf size. The libc may have changed the definition of jmp_buf between versions. Also, even though the compiler names match, one may support floating point and the other may not, etc. Even if the versions match, it is conceivable that the compiler configuration can give different jmp_buf sizes.
Both suggestions are non-portable. The second suggestion will not work if your own code calls setjmp() or longjmp(); i.e., I assume that the library is using these functions and the caller only allocates the space.
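One way to see the mismatch directly (a sketch, assuming you can build and run a small test with each arm-none-eabi-gcc toolchain, e.g. via semihosting) is to print the jmp_buf size each one assumes:

#include <setjmp.h>
#include <stdio.h>

int main(void)
{
    /* Compare this value across the two toolchains; the difference is the
     * layout shift observed in png_struct_def. */
    printf("sizeof(jmp_buf) = %u\n", (unsigned)sizeof(jmp_buf));
    return 0;
}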

how can shared library get its own base address

I have the offset addresses of all symbols (obtained with libelf running on the library's own .so binary). Now, at runtime, I need to calculate the absolute addresses of all those symbols, and for that I need to get the base address (where the shared library is loaded) and do the calculation:
symbol_address = base_address + symbol_offset
How can a shared lib get its own base address? On Windows I would use the parameter passed to DllMain; is there some equivalent on Linux?
On Linux, dladdr() on any symbol from libfoo.so will give you
void *dli_fbase; /* Load address of that object */
More info here.
Alternatively, dl_iterate_phdr can give you load address of every ELF image loaded into current process.
Both are GLIBC extensions. If you are not using GLIBC, do tell what you are using, so a more appropriate answer can be given.
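For example, a minimal sketch of the dladdr() approach (assuming glibc and linking with -ldl), called from inside the library itself:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>

void *my_base_address(void)
{
    Dl_info info;

    /* Any symbol defined in this library works as the probe address. */
    if (dladdr((void *)&my_base_address, &info) == 0)
        return NULL;

    return info.dli_fbase;  /* load address of the containing shared object */
}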
This is an old question, but still relevant.
I found this example code from ubuntu to be very useful. It will print all your shared libraries and their segments.
http://manpages.ubuntu.com/manpages/bionic/man3/dl_iterate_phdr.3.html
After some research I managed to find a way to discover the load address of a library from its handle, which is returned by the dlopen() function. It is done with the help of the following macro:
#define LIBRARY_ADDRESS_BY_HANDLE(dlhandle) ((NULL == dlhandle) ? NULL : (void*)*(size_t const*)(dlhandle))
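A possible usage sketch (it relies on the glibc implementation detail that a dlopen() handle points to a struct link_map whose first member is the load address, so it is not portable; "libfoo.so" is a placeholder name and you must link with -ldl):

#include <dlfcn.h>
#include <stdio.h>
#include <stddef.h>

#define LIBRARY_ADDRESS_BY_HANDLE(dlhandle) \
    ((NULL == (dlhandle)) ? NULL : (void *)*(size_t const *)(dlhandle))

int main(void)
{
    void *handle = dlopen("libfoo.so", RTLD_NOW);

    if (handle != NULL)
        printf("base address: %p\n", LIBRARY_ADDRESS_BY_HANDLE(handle));

    return 0;
}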
