Checking local filesystem fsid from osx kernel extension (kext) - macos

I have a workaround for obtaining the local fsid from within the kext, simply by querying the attributes of a predefined local file:
static inline uint64_t get_fsid(const vfs_context_t ctx, const vnode_t vp) {
    struct vnode_attr vap;

    VATTR_INIT(&vap);
    VATTR_WANTED(&vap, va_fsid);
    vnode_getattr(vp, &vap, ctx);
    return (uint64_t)vap.va_fsid;
}
Another option is to compute the fsid in user space (using getmntinfo) and pass it down to the driver.
However, I would prefer to get this data directly from kernel space, without relying on any file that happens to exist. Is there a KPI that supports this?

You can iterate over all mount points in the system using the function
int vfs_iterate(int, int (*)(struct mount *, void *), void *);
For each mount object, you can check its fsid using
struct vfsstatfs * vfs_statfs(mount_t);
vfsstatfs has an f_fsid field.
Both functions and the struct are declared and documented in <sys/mount.h>. The functions are exported in the BSD KPI.
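As a rough, untested sketch (the callback return codes and field names are from my reading of <sys/mount.h>, so verify them against the SDK headers), an iteration that logs every mount's fsid could look like this:
#include <sys/mount.h>
#include <libkern/libkern.h>

/* Called by vfs_iterate() once for every mounted filesystem. */
static int fsid_callback(mount_t mp, void *arg)
{
    struct vfsstatfs *sp = vfs_statfs(mp);

    if (sp != NULL) {
        /* f_fsid is an fsid_t: two 32-bit values identifying the mount. */
        printf("mount %s: fsid = 0x%x 0x%x\n",
               sp->f_mntonname, sp->f_fsid.val[0], sp->f_fsid.val[1]);
    }
    return VFS_RETURNED;    /* keep iterating; VFS_RETURNED_DONE would stop */
}

static void dump_all_fsids(void)
{
    vfs_iterate(0 /* flags */, fsid_callback, NULL);
}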

What is the purpose of the function "blk_rq_map_user" in the NVME disk driver?

I am trying to understand the NVMe Linux driver. I am now tackling the function nvme_submit_user_cmd, which I reproduce partially here:
static int nvme_submit_user_cmd(struct request_queue *q,
        struct nvme_command *cmd, void __user *ubuffer,
        unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
        u32 meta_seed, u32 *result, unsigned timeout)
{
    bool write = nvme_is_write(cmd);
    struct nvme_ns *ns = q->queuedata;
    struct gendisk *disk = ns ? ns->disk : NULL;
    struct request *req;
    struct bio *bio = NULL;
    void *meta = NULL;
    int ret;

    req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY);
    [...]
    if (ubuffer && bufflen) {
        ret = blk_rq_map_user(q, req, NULL, ubuffer, bufflen,
                GFP_KERNEL);
    [...]
The ubuffer argument is a pointer to data in the user virtual address space (it comes from an ioctl issued by a user application).
Following blk_rq_map_user, I was expecting some sort of mmap mechanism to translate the user-space address into a physical address, but I can't wrap my head around what the function is doing. For reference, here's the call chain:
blk_rq_map_user -> import_single_range -> blk_rq_map_user_iov
Following those functions just created more confusion for me, and I'd like some help.
The reason I think this function does a sort of mmap is (apart from the name) that this address will become part of the struct request on the request queue, which will eventually be processed by the NVMe disk driver (https://lwn.net/Articles/738449/), and my guess is that the disk wants a physical address when fulfilling requests.
However, I don't understand how this mapping is done.
ubuffer is a user virtual address, which means it can only be used in the context of a user process, which it is when submit is called. To use that buffer after this call ends, it has to be mapped to one or more physical addresses for the bios/bvecs. The unmap call frees the mapping after the I/O completes. If the device can't directly address the user buffer due to hardware constraints then a bounce buffer will be mapped and a copy of the data will be made.
Edit: note that unless a copy is needed, there is no kernel virtual address mapped to the buffer because the kernel never needs to touch the data.
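To make that pairing concrete, here is a condensed, hedged sketch of how the rest of nvme_submit_user_cmd in kernels of that era pairs the mapping with execution and cleanup (the exact signatures of blk_execute_rq and friends vary between kernel versions, so treat this as an illustration rather than verbatim source):
    /* Sketch only: map the user buffer, run the request, then unmap. */
    if (ubuffer && bufflen) {
        ret = blk_rq_map_user(q, req, NULL, ubuffer, bufflen,
                GFP_KERNEL);            /* pins the user pages into req->bio */
        if (ret)
            goto out;
        bio = req->bio;
    }

    blk_execute_rq(req->q, disk, req, 0);   /* device DMAs to/from the pinned pages */

    if (bio)
        blk_rq_unmap_user(bio);             /* drop the page refs; copy back if a bounce buffer was used */
 out:
    blk_mq_free_request(req);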

GCC-ARM call function of binary from other one

I have a main binary and an app binary. The main binary is compiled with FreeRTOS and has access to the HAL layer and thus to the UART.
The app binary is loaded at runtime. From the app binary I need to call a uart_print function in the main binary to log messages over the UART. Besides this, I also need to call other functions of the main binary from the app binary.
I searched the web and found How to write dynamic loader for bare-metal arm-application, which suggests implementing jump tables.
I have the following implementation:
jumptbl.h
typedef struct _MyAPI
{
    void (*jumptbl_msg)(const char *msg);
} MyAPI;
In the main binary I instantiate the structure:
void PrintMsg(const char *msg)
{
    HAL_UART_Transmit(&huart3, (uint8_t *)msg, strlen(msg), 10);
}
__attribute__((section(".jumptbl"))) MyAPI main_API =
{
    &PrintMsg,
};
In the linker script I create a section placed at address 0x2001F000:
.jumptbl_block 0x2001F000 :
{
    KEEP(*(.jumptbl))
} > RAM
Then, from the app binary, I call the PrintMsg function through the table:
MyAPI *pAPI = (MyAPI*)(0x2001F000);
pAPI->jumptbl_msg("hello world");
But my program hard faults when the function is called through the jump table.
I also tried another approach: I obtained the address of PrintMsg using arm-none-eabi-nm and called it directly, but again the program hard faulted.
typedef void (*t_funcPtr)(const char *);
t_funcPtr MyFunc = (t_funcPtr)0x08001af4;
MyFunc("hello world");
Can you please suggest how I can call a function located in section sec_x (loaded at address x) of one binary from another binary?

Create array of struct scatterlist from buffer

I am trying to build an array of struct scatterlist from a buffer pointed to by a kernel virtual address (I know the byte size of the buffer, but it may be large). Ideally I would like to have a function like init_sg_array_from_buf:
void my_function(void *buffer, int buffer_length)
{
    struct scatterlist *sg;
    int sg_count;

    sg_count = init_sg_array_from_buf(buffer, buffer_length, sg);
}
Which function in the scatterlist API does something similar? Currently the only possibility I see is to manually determine the number of pages spanned by the buffer. Windows has a kernel macro called ADDRESS_AND_SIZE_TO_SPAN_PAGES, but I didn't even manage to find something like that in the Linux kernel.
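For illustration only, here is the arithmetic that "pages spanned by the buffer" boils down to, using standard page macros; pages_spanned is a hypothetical helper, not an existing kernel function (it is the moral equivalent of Windows' ADDRESS_AND_SIZE_TO_SPAN_PAGES):
#include <linux/kernel.h>   /* DIV_ROUND_UP */
#include <linux/mm.h>       /* PAGE_SIZE, offset_in_page() */

/* Hypothetical helper: how many pages does [buffer, buffer + length) touch? */
static inline unsigned long pages_spanned(const void *buffer, size_t length)
{
    if (length == 0)
        return 0;
    return DIV_ROUND_UP(offset_in_page(buffer) + length, PAGE_SIZE);
}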

How to attach file operations to sysfs attribute in platform driver?

I wrote a platform driver for a peripheral we developed and would like to expose some configuration options to the sysfs. I have managed to create the appropriate files using attribute structs (see below) and sysfs_create_file in the probe function, but I can't figure out how to attach the show/store functions to the structs in a platform driver.
Most resources I found online used a device_attribute struct or something similar to create their files, is that also appropriate here? Is there another way to do this for a platform driver?
My attribute struct looks like this:
struct attribute subkey_attr = {
    .name = "subkeys",
    .mode = S_IWUGO | S_IRUGO,
};
And I register the file using this call:
riddler_kobject = &pdev->dev.kobj;
ret_val = sysfs_create_file(riddler_kobject, &subkey_attr);
It boils down to this:
reuse the existing kobject from struct device (embedded in your struct platform_device) for sysfs_create_group(), instead of creating your own kobject
use DEVICE_ATTR() to declare a struct device_attribute, instead of the plain __ATTR(), which creates a struct kobj_attribute.
Here is how I created sysfs attributes for my platform driver.
Create the structure you'll use as private data in the show() / store() operations of your sysfs attribute (file). For example:
struct mydrv {
    struct device *dev;
    long myparam;
};
Allocate this structure in your driver's probe():
static int mydrv_probe(struct platform_device *pdev)
{
    struct mydrv *mydrv;

    mydrv = devm_kzalloc(&pdev->dev, sizeof(*mydrv), GFP_KERNEL);
    if (!mydrv)
        return -ENOMEM;
    mydrv->dev = &pdev->dev;
    platform_set_drvdata(pdev, mydrv);
    ...
}
Create show() / store() functions:
static ssize_t mydrv_myparam_show(struct device *dev,
        struct device_attribute *attr, char *buf)
{
    struct mydrv *mydrv = dev_get_drvdata(dev);
    int len;

    len = sprintf(buf, "%ld\n", mydrv->myparam);
    if (len <= 0)
        dev_err(dev, "mydrv: Invalid sprintf len: %d\n", len);
    return len;
}

static ssize_t mydrv_myparam_store(struct device *dev,
        struct device_attribute *attr, const char *buf, size_t count)
{
    struct mydrv *mydrv = dev_get_drvdata(dev);
    int ret;

    ret = kstrtol(buf, 10, &mydrv->myparam);
    if (ret)
        return ret;
    return count;
}
Create device attribute for those functions (right after those functions):
static DEVICE_ATTR(myparam, S_IRUGO | S_IWUSR, mydrv_myparam_show,
mydrv_myparam_store);
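For reference, and roughly from memory of <linux/device.h> (verify against your kernel), the DEVICE_ATTR() line above is more or less equivalent to declaring the struct device_attribute by hand; this is where the dev_attr_myparam identifier used in the next step comes from:
/* Approximately what DEVICE_ATTR(myparam, S_IRUGO | S_IWUSR, ...) expands to: */
static struct device_attribute dev_attr_myparam = {
    .attr  = { .name = "myparam", .mode = S_IRUGO | S_IWUSR },
    .show  = mydrv_myparam_show,
    .store = mydrv_myparam_store,
};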
Declare the attribute table (in effect listing the sysfs files for your driver):
static struct attribute *mydrv_attrs[] = {
    &dev_attr_myparam.attr,
    NULL
};
Declare the attribute group (in effect specifying the sysfs directory for your driver):
static struct attribute_group mydrv_group = {
    .name = "mydrv",
    .attrs = mydrv_attrs,
};

static struct attribute_group *mydrv_groups[] = {
    &mydrv_group,
    NULL
};
which can actually be replaced with a single line:
ATTRIBUTE_GROUPS(mydrv);
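Roughly, and again from memory of <linux/sysfs.h> (so double-check), ATTRIBUTE_GROUPS(mydrv) generates both the group and the NULL-terminated group array; one difference from the hand-written version above is that the generated group has no .name, so the attributes land directly in the device directory instead of a "mydrv" subdirectory:
/* Approximate expansion of ATTRIBUTE_GROUPS(mydrv): */
static const struct attribute_group mydrv_group = {
    .attrs = mydrv_attrs,
};
static const struct attribute_group *mydrv_groups[] = {
    &mydrv_group,
    NULL,
};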
Create sysfs directory and files in your driver's probe() function:
static int mydrv_probe(struct platform_device *pdev)
{
    int ret;
    ...
    ret = sysfs_create_group(&pdev->dev.kobj, &mydrv_group);
    if (ret) {
        dev_err(&pdev->dev, "sysfs creation failed\n");
        return ret;
    }
    ...
}
Remove your sysfs files in your driver's remove() function:
static int mydrv_remove(struct platform_device *pdev)
{
    sysfs_remove_group(&pdev->dev.kobj, &mydrv_group);
    ...
}
Race condition note
As #FranzForstmayr correctly pointed out, there may be a race condition when adding sysfs files with sysfs_create_group() in mydrv_probe(). That's because user space may already have been notified that the device exists before mydrv_probe() runs (where the files are actually created by sysfs_create_group()), so it can look for attributes that are not there yet. This issue is covered in detail in the "How to Create a sysfs File Correctly" article by Greg Kroah-Hartman.
So in our case of a platform_device, instead of calling sysfs_create_group() (and its counterpart sysfs_remove_group()), you can use the default attribute group. To do so, assign your attribute groups variable to the corresponding .groups field of your struct device:
static int mydrv_probe(struct platform_device *pdev)
{
    ...
    pdev->dev.groups = mydrv_groups;
    ...
}
DISCLAIMER: I didn't test this code, though it should work, because of this code.
See links [1], [2], [3] for more insight into the mentioned race condition.
For more examples, run the following command in the kernel source directory:
$ git grep -l --all-match -e platform_device -e attribute -e '\.groups =' -- drivers/
You can also search for "default attribute" in commit messages:
$ git log --no-merges --oneline --grep="default attribute" -- drivers/
Some commits I found this way: [4], [5], [6], [7].
References
[1] My attributes are way too racy, what should I do?
[2] PATCH: sysfs: add devm_sysfs_create_group() and friends
[3] [GIT PATCH] Driver core patches for 3.11-rc2
[4] commit 1
[5] commit 2
[6] commit 3
[7] commit 4
Not enough reputation to post a comment, but I just want to comment on the default attribute group note from the accepted answer.
My understanding is that this should not be assigned in the probe function, as given in the example, but should instead be set in the device struct (or the device_driver, class, or bus, depending on your driver), usually defined at the end of your file.
For example:
static struct device iio_evgen_dev = {
    .bus = &iio_bus_type,
    .groups = iio_evgen_groups,
    .release = &iio_evgen_release,
};
from this example
Strangely, according to this it doesn't work correctly when using DEVICE_INT_ATTR to create the attribute, so I'm not sure what that's all about.
Also, I'm not 100% sure, but I think that this is invoked when the driver is loaded, not when the device is probed.
This is an addition to Sam Protsenko's and Anthony's answers
If you create device attributes via the DEVICE_ATTR() macros, you have to register the attribute groups (mydrv_groups) in the .dev_groups field rather than in .groups. Note that on reasonably recent kernels .dev_groups is a field of struct device_driver (embedded in your struct platform_driver), not of struct device; the driver's .groups field is for DRIVER_ATTR()-style attributes that show up in the driver's own sysfs directory. For a platform driver this looks roughly like:
static struct platform_driver mydrv_driver = {
    .driver = {
        .name = "mydrv",
        .dev_groups = mydrv_groups,     /* DEVICE_ATTR attributes, created for every bound device */
        .groups = another_attr_groups,  /* DRIVER_ATTR attributes, created on the driver itself */
    },
    .probe = mydrv_probe,
    .remove = mydrv_remove,
};
Then the attributes are automatically registered correctly without setting up anything in the probe/remove functions, as described in Greg Kroah-Hartman's article.
Assume that the module has been loaded into the kernel and the driver is registered at
/sys/bus/platform/drivers/mydrv
Every device instance will then appear as a subdirectory of the driver's folder, e.g.
/sys/bus/platform/drivers/mydrv/mydrv1
/sys/bus/platform/drivers/mydrv/mydrv2
Registering attributes in the .groups field creates the attributes in the driver folder.
Registering attributes in the .dev_groups field creates the attributes in each device's instance folder.
mydrv
├── driver_attr1
├── driver_attr2
└── mydrv1
    ├── device_attr1
    └── device_attr2
The show/store functions of the attributes in the .groups field do not have access to the driver data set via platform_set_drvdata(pdev, mydrv).
At least not by accessing it via dev_get_drvdata(dev).
Accessing the driver data via dev_get_drvdata(dev) returns NULL and dereferencing it will result in a kernel oops.
In turn, the show/store functions of the attributes in the .dev_groups field do have access to the driver data via
struct mydrv *mydrv = dev_get_drvdata(dev);

Linux Char Driver

Can anyone tell me how a char driver is bound to the corresponding physical device?
Also, I would like to know where inside a char driver we specify the physical-device-related information that the kernel uses to do the binding.
Thanks!
A global array — bdev_map for block and cdev_map for character devices — is used to implement a hash table, which employs the device major number as hash key.
While registering a char driver, the following calls are invoked to obtain the major and minor numbers:
int register_chrdev_region(dev_t from, unsigned count, const char *name);
int alloc_chrdev_region(dev_t *dev, unsigned baseminor, unsigned count,
                        const char *name);
After a device number range has been obtained, the device needs to be activated by adding it to the character device database.
void cdev_init(struct cdev *cdev, const struct file_operations *fops);
int cdev_add(struct cdev *p, dev_t dev, unsigned count);
Here the cdev structure is initialized with the file operations and added for the corresponding character device numbers.
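As a minimal, hedged sketch of how these pieces fit together (mydev_fops, mydev_init and the "mychardev" name are placeholders, not from the answer above):
#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/module.h>

static dev_t mydev_devt;
static struct cdev mydev_cdev;

static const struct file_operations mydev_fops = {
    .owner = THIS_MODULE,
    /* .open, .read, .write, ... */
};

static int __init mydev_init(void)
{
    int ret;

    /* Ask the kernel for one dynamically allocated major/minor pair. */
    ret = alloc_chrdev_region(&mydev_devt, 0, 1, "mychardev");
    if (ret)
        return ret;

    /* Bind the file operations and add the device to the char device map. */
    cdev_init(&mydev_cdev, &mydev_fops);
    ret = cdev_add(&mydev_cdev, mydev_devt, 1);
    if (ret)
        unregister_chrdev_region(mydev_devt, 1);
    return ret;
}
module_init(mydev_init);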
Whenever a device file is opened, the various filesystem implementations invoke the init_special_inode function to create the inode for a block or character device file.
void init_special_inode(struct inode *inode, umode_t mode, dev_t rdev)
{
    inode->i_mode = mode;
    if (S_ISCHR(mode)) {
        inode->i_fop = &def_chr_fops;
        inode->i_rdev = rdev;
    } else if (S_ISBLK(mode)) {
        inode->i_fop = &def_blk_fops;
        inode->i_rdev = rdev;
    } else
        printk(KERN_DEBUG "init_special_inode: bogus i_mode (%o)\n",
               mode);
}
Now the chrdev_open() method from def_chr_fops is invoked. It looks up the inode->i_rdev device in the cdev_map and obtains an instance of the cdev structure. With that reference to the cdev, it binds file->f_op to the cdev's file operations and invokes the character driver's open method.
In a character driver such as an I2C client driver, we specify the slave address in the client structure's addr field and then call i2c_master_send() or i2c_master_recv() on this client. These calls ultimately go to the adapter controlling that bus, and the adapter then communicates with the device identified by the slave address.
The binding of the driver's operations is done mainly with the cdev_init() and cdev_add() functions.
A driver may also choose to provide a probe() function and let the kernel find and bind all the devices the driver is capable of supporting.
