I am trying to write a simple routine that will change the operating channel of the wireless device.
So far, I have:
/* These are function arguments */
struct ieee80211_local *local;
struct ieee80211_sub_if_data *sdata;
/* Declare a struct for the new channel */
struct cfg80211_chan_def new_channel;
/* Testing with 5765MHz */
new_channel.center_freq1 = 5765;
new_channel.center_freq2 = 0;
new_channel.chan = ieee80211_get_channel(sdata->local->hw.wiphy, 5765);
local->_oper_chandef = new_channel;
/* Reconfigure hardware */
ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_CHANNEL);
However, I get plenty of kernel-level warnings due to an invalid channel configuration, and the channel does not change.
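For reference, cfg80211_chan_def also has a width member, which the snippet above never sets. A minimal sketch of building the same chandef through the cfg80211 helper (assuming a plain 20 MHz, non-HT channel is intended; this is not a confirmed fix, just how the struct is normally filled) would be:

struct cfg80211_chan_def new_channel;
struct ieee80211_channel *chan;

chan = ieee80211_get_channel(local->hw.wiphy, 5765);
if (!chan)
    return -EINVAL;

/* fills .chan, .width, .center_freq1 and .center_freq2 consistently */
cfg80211_chandef_create(&new_channel, chan, NL80211_CHAN_NO_HT);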
Hardware/software specifications:
Fujitsu Lifebook S
Atheros wireless card (ath9k)
Linux kernel 3.14 (mptcp-0.89, to be more precise)
I am working with netlink sockets and have written code for a user-space application and a kernel module. The kernel module sends notifications regularly to the user-space application. If the application gets killed, the kernel module doesn't stop sending notifications. How will the kernel know when the application gets killed? Can we use bind and unbind in netlink_kernel_cfg for this purpose? I have searched a lot but didn't find any information on this.
When any application is killed in Linux, all file descriptors (including socket descriptors) are closed. In order to have your kernel module notified when the application closes the netlink socket you need to implement the optional .unbind op in struct netlink_kernel_cfg.
include/linux/netlink.h
/* optional Netlink kernel configuration parameters */
struct netlink_kernel_cfg {
    unsigned int groups;
    unsigned int flags;
    void (*input)(struct sk_buff *skb);
    struct mutex *cb_mutex;
    int (*bind)(struct net *net, int group);
    void (*unbind)(struct net *net, int group);
    bool (*compare)(struct net *net, struct sock *sk);
};
Set the configuration parameters in your module:
struct netlink_kernel_cfg cfg = {
    .unbind = my_unbind,
};

netlink = netlink_kernel_create(&init_net, ... , &cfg);
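For completeness, a minimal sketch of what the callback itself could look like (the name my_unbind and the notification-stopping logic are assumptions, not from the original post):

/* Invoked from netlink_release() for every multicast group the socket
 * was still a member of when it was closed - including the close that
 * happens when the application is killed. */
static void my_unbind(struct net *net, int group)
{
    pr_info("netlink listener left group %d - stopping notifications\n", group);
    /* e.g. clear a "listener present" flag checked by the code that
     * periodically sends notifications */
}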
To understand how this is used, note the following fragment from the netlink protocol family proto_ops definition:
static const struct proto_ops netlink_ops = {
    .family =  PF_NETLINK,
    .owner =   THIS_MODULE,
    .release = netlink_release,
    ...
};
.release is called upon close of the socket (in your case, due to the application being killed).
As a part of its cleanup process, netlink_release() has the following implementation:
net/netlink/af_netlink.c
if (nlk->netlink_unbind) {
    int i;

    for (i = 0; i < nlk->ngroups; i++)
        if (test_bit(i, nlk->groups))
            nlk->netlink_unbind(sock_net(sk), i + 1);
}
Here you can see that if the optional netlink_unbind has been provided, then it will be executed, providing you a callback when your application has closed the socket (either gracefully or as a result of being killed).
I am currently writing a device driver for Linux for use on PowerPC.
The device tree entry is as follows:
// PPS Interrupt client
pps_hwirq {
    compatible = "pps-hwirq";
    interrupts = <17 0x02>;      // IPIC 17 = IRQ1, 0x02 = falling edge
    interrupt-parent = <&ipic>;
};
The 0x02 flag is quite important - the PPS is aligned with the falling edge, but this is not universal on GPS receivers and therefore should be configurable.
In the probe() function of the driver, obtaining the IRQ number is straightforward:
hwirq = irq_of_parse_and_map(np, 0);
if (hwirq == NO_IRQ) {
    dev_err(&pdev->dev, "No interrupt found in the device tree\n");
    return -EINVAL;
}
But how does one map the IRQ flags from the device tree to the driver?
/* ****TODO****: Get the interrupt flags from the device tree.
 * For now, hard code to suit my problem, but since this differs
 * by GPS receiver, it should be configurable.
 */
flags = IRQF_TRIGGER_FALLING;

/* register IRQ interrupt handler */
ret = devm_request_irq(&pdev->dev, data->irq, pps_hwint_irq_handler,
                       flags, data->info.name, data);
Unfortunately, there are few - if any - examples in the tree that actually do this job; most leave this flag as 0 (leave as-is). Here's a snippet of the results when grepping for devm_request_irq, noting the values of the flags:
./drivers/crypto/mxs-dcp.c: ret = devm_request_irq(dev, dcp_vmi_irq, mxs_dcp_irq, 0,
./drivers/crypto/mxs-dcp.c: ret = devm_request_irq(dev, dcp_irq, mxs_dcp_irq, 0,
./drivers/crypto/omap-sham.c: err = devm_request_irq(dev, dd->irq, dd->pdata->intr_hdlr,
./drivers/crypto/omap-aes.c: err = devm_request_irq(dev, irq, omap_aes_irq, 0,
./drivers/crypto/picoxcell_crypto.c: if (devm_request_irq(&pdev->dev, irq->start, spacc_spacc_irq, 0,
Or hard code it to what the hardware actually asserts:
./drivers/crypto/tegra-aes.c: err = devm_request_irq(dev, dd->irq, aes_irq, IRQF_TRIGGER_HIGH |
So how does one cleanly associate this property from the device tree to the actual driver?
Below I'll show how to obtain the IRQ number and IRQ flags from the Device Tree in some common cases:
in I2C drivers
in platform drivers
manually
In I2C drivers
In short
If you're writing an I2C driver, you don't need to read IRQ parameters from DT manually. You can rely on the I2C core to populate the IRQ parameters for you (see the sketch after this list):
in your probe() function, client->irq will contain the IRQ number
devm_request_irq() will use IRQ flags from DT automatically (just don't pass any IRQ trigger flags to that function).
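As a minimal sketch (the driver and device names are made up), a probe() might look like this; note that 0 is passed as the trigger flags:

#include <linux/i2c.h>
#include <linux/interrupt.h>

static irqreturn_t my_i2c_irq(int irq, void *dev_id)
{
    /* acknowledge and handle the device interrupt here */
    return IRQ_HANDLED;
}

static int my_i2c_probe(struct i2c_client *client,
                        const struct i2c_device_id *id)
{
    int ret;

    /* client->irq was already filled in by the I2C core from DT */
    ret = devm_request_irq(&client->dev, client->irq, my_i2c_irq,
                           0 /* trigger type taken from DT */,
                           "my-i2c-dev", client);
    if (ret)
        dev_err(&client->dev, "failed to request IRQ %d\n", client->irq);

    return ret;
}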
Details
Let's look at the i2c_device_probe() function (it's where your driver's probe() function is being called from):
static int i2c_device_probe(struct device *dev)
{
    ...
    if (dev->of_node) {
        ...
        irq = of_irq_get(dev->of_node, 0);
    }
    ...
    client->irq = irq;
    ...
    status = driver->probe(client, i2c_match_id(driver->id_table, client));
}
So, client->irq will already contain the IRQ number in your driver's probe() function.
As for IRQ flags: of_irq_get() (in the code above) eventually calls irqd_set_trigger_type(), which internally stores the IRQ flags (read from the device tree) for your interrupt number. So, when you call devm_request_irq(), it eventually ends up in __setup_irq(), which does the following:
/*
 * If the trigger type is not specified by the caller,
 * then use the default for this interrupt.
 */
if (!(new->flags & IRQF_TRIGGER_MASK))
    new->flags |= irqd_get_trigger_type(&desc->irq_data);
where:
new->flags contains flags you provided to devm_request_irq()
irqd_get_trigger_type() returns flags obtained from DT
In other words, if you don't pass IRQ flags to devm_request_irq() (e.g. pass 0), it will use IRQ flags obtained from device tree.
See also this question for details.
In platform drivers
You can use platform_get_irq() to obtain IRQ number. It also stores (internally) IRQ flags obtained from DT, so if you pass flags=0 to devm_request_irq(), flags from DT will be used.
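A sketch under the same assumptions (all names are hypothetical):

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t my_plat_irq(int irq, void *dev_id)
{
    return IRQ_HANDLED;
}

static int my_plat_probe(struct platform_device *pdev)
{
    int irq, ret;

    /* platform_get_irq() also records the trigger type from DT */
    irq = platform_get_irq(pdev, 0);
    if (irq < 0)
        return irq;

    /* flags = 0, so the trigger type recorded from DT is used */
    ret = devm_request_irq(&pdev->dev, irq, my_plat_irq, 0,
                           dev_name(&pdev->dev), pdev);
    return ret;
}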
Manually
If your driver doesn't rely on kernel frameworks, you have to obtain IRQ values manually:
IRQ number can be obtained (as you mentioned) by irq_of_parse_and_map(); this function not only returns IRQ number, but also stores IRQ flags for your IRQ number (by calling irqd_set_trigger_type() eventually); stored IRQ flags will be automatically used in devm_request_irq(), if you don't pass IRQ trigger type to it (e.g. you can pass flags=0)
IRQ flags can be obtained by irq_get_trigger_type(), but only after executing irq_of_parse_and_map()
So probably you only need to run irq_of_parse_and_map() and let devm_request_irq() handle flags for you (just make sure you don't pass trigger flags to it).
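Applied to the probe() fragment from the question above, a sketch might look like this (identifiers such as np, data and pps_hwint_irq_handler come from the question; the explicit irq_get_trigger_type() call is optional and only shown for completeness):

hwirq = irq_of_parse_and_map(np, 0);   /* also stores the trigger type from DT */
if (hwirq == NO_IRQ) {
    dev_err(&pdev->dev, "No interrupt found in the device tree\n");
    return -EINVAL;
}

/* optional: read back the flags parsed from "interrupts = <17 0x02>" */
flags = irq_get_trigger_type(hwirq);

/* pass 0 as the trigger flags so the type stored from DT is used */
ret = devm_request_irq(&pdev->dev, hwirq, pps_hwint_irq_handler,
                       0, data->info.name, data);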
I am trying to write a Linux kernel module to map some address back to the user using dma_common_mmap(). I then want the user to mmap and write/read the address space.
My main problem now is that I can't find the documentation for dma_common_mmap(); does any exist? I have googled but couldn't find out how to use it so that the user can read/write the address.
The documentation for dma_common_mmap() doesn't exist, but you can look at the kernel-doc comment for the dma_mmap_attrs() function:
/**
 * dma_mmap_attrs - map a coherent DMA allocation into user space
 * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
 * @vma: vm_area_struct describing requested user mapping
 * @cpu_addr: kernel CPU-view address returned from dma_alloc_attrs
 * @handle: device-view address returned from dma_alloc_attrs
 * @size: size of memory originally requested in dma_alloc_attrs
 * @attrs: attributes of mapping properties requested in dma_alloc_attrs
 *
 * Map a coherent DMA buffer previously allocated by dma_alloc_attrs
 * into user space. The coherent DMA buffer must not be freed by the
 * driver until the user space mapping has been released.
 */
static inline int
dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma, void *cpu_addr,
               dma_addr_t dma_addr, size_t size, struct dma_attrs *attrs)
{
    struct dma_map_ops *ops = get_dma_ops(dev);
    BUG_ON(!ops);
    if (ops->mmap)
        return ops->mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
    return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
}

#define dma_mmap_coherent(d, v, c, h, s) dma_mmap_attrs(d, v, c, h, s, NULL)
dma_mmap_attrs() in turn calls dma_common_mmap(), so all of the documentation (except for the attrs parameter) applies to dma_common_mmap() as is.
EDIT
I think you should use dma_mmap_coherent() (along with dma_alloc_coherent()), which does pretty much the same as dma_common_mmap() (see code above). See this example to get some clue on how to use it both in kernel side and in user-space. See also how dma_mmap_coherent() is used in ALSA kernel code, in snd_pcm_lib_default_mmap() function.
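A minimal sketch of that approach inside a driver's mmap file operation (all names, the buffer size, and the surrounding driver plumbing are assumptions, not from the original question):

#include <linux/dma-mapping.h>
#include <linux/fs.h>

#define MY_BUF_SIZE  (64 * 1024)       /* hypothetical buffer size */

static struct device *my_dev;          /* set during probe */
static void *my_cpu_addr;              /* kernel virtual address of the buffer */
static dma_addr_t my_dma_handle;       /* device/bus address of the buffer */

static int my_alloc_buffer(void)
{
    /* allocate the coherent buffer once, e.g. in probe() */
    my_cpu_addr = dma_alloc_coherent(my_dev, MY_BUF_SIZE,
                                     &my_dma_handle, GFP_KERNEL);
    return my_cpu_addr ? 0 : -ENOMEM;
}

/* .mmap file operation: hand the coherent buffer to user space */
static int my_mmap(struct file *filp, struct vm_area_struct *vma)
{
    return dma_mmap_coherent(my_dev, vma, my_cpu_addr,
                             my_dma_handle, MY_BUF_SIZE);
}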
Can anyone tell me how a char driver is bound to the corresponding physical device?
Also, I would like to know where inside a char driver we specify the physical-device-related information, which the kernel can use to do the binding.
Thanks!
A global array (bdev_map for block devices, cdev_map for character devices) is used to implement a hash table, which uses the device major number as the hash key.
While registering a char driver, the following calls are invoked to get the major and minor numbers:

int register_chrdev_region(dev_t from, unsigned count, const char *name);
int alloc_chrdev_region(dev_t *dev, unsigned baseminor, unsigned count,
                        const char *name);
After a device number range has been obtained, the device needs to be activated by adding it to the character device database.
void cdev_init(struct cdev *cdev, const struct file_operations *fops);
int cdev_add(struct cdev *p, dev_t dev, unsigned count);
Here the cdev structure is initialized with the file operations for the respective character device.
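Putting those calls together, a minimal registration sketch might look like this (the names are hypothetical):

#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/module.h>

static dev_t my_devt;
static struct cdev my_cdev;

static const struct file_operations my_fops = {
    .owner = THIS_MODULE,
    /* .open, .read, .write, ... */
};

static int __init my_chardev_init(void)
{
    int ret;

    /* get a dynamically allocated major number and one minor */
    ret = alloc_chrdev_region(&my_devt, 0, 1, "mychardev");
    if (ret)
        return ret;

    /* add the device to the character device database (cdev_map) */
    cdev_init(&my_cdev, &my_fops);
    my_cdev.owner = THIS_MODULE;
    ret = cdev_add(&my_cdev, my_devt, 1);
    if (ret)
        unregister_chrdev_region(my_devt, 1);

    return ret;
}
module_init(my_chardev_init);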
Whenever a device file is opened, the various filesystem implementations invoke the init_special_inode function to create the inode for a block or character device file.
void init_special_inode(struct inode *inode, umode_t mode, dev_t rdev)
{
    inode->i_mode = mode;
    if (S_ISCHR(mode)) {
        inode->i_fop = &def_chr_fops;
        inode->i_rdev = rdev;
    } else if (S_ISBLK(mode)) {
        inode->i_fop = &def_blk_fops;
        inode->i_rdev = rdev;
    } else
        printk(KERN_DEBUG "init_special_inode: bogus i_mode (%o)\n",
               mode);
}
Now the chrdev_open() method from def_chr_fops will get invoked. It looks up the inode->i_rdev device number in the cdev_map array and obtains an instance of the cdev structure. With that reference to the cdev, it binds file->f_op to the cdev's file operations and invokes the character driver's open method.
In a character driver like an I2C client driver, we specify the slave address in the client structure's "addr" field and then call i2c_master_send() or i2c_master_recv() on this client. These calls ultimately go to the main adapter controlling that line, and the adapter then communicates with the device specified by the slave address.
The binding of the driver's operations is done mainly with the cdev_init() and cdev_add() functions.
Also, a driver may choose to provide a probe() function and let the kernel find and bind all the devices that the driver is capable of supporting.
struct i2c_algorithm has a function pointer template, master_xfer, for an I2C bus implementation. Where can I find the default master_xfer routine in the Linux kernel source?
Please, someone guide me.
What master_xfer is set to depends on your platform and bus. Look under drivers/i2c/busses/ to find where this function pointer is set. Note that it could be set to NULL.
An example of where it is set is in drivers/i2c/busses/i2c-pxa.c:
static const struct i2c_algorithm i2c_pxa_algorithm = {
    .master_xfer    = i2c_pxa_xfer,
    .functionality  = i2c_pxa_functionality,
};
Also look at include/linux/i2c.h:
struct i2c_algorithm {
    /* If an adapter algorithm can't do I2C-level access, set master_xfer
       to NULL. If an adapter algorithm can do SMBus access, set
       smbus_xfer. If set to NULL, the SMBus protocol is simulated
       using common I2C messages */

    /* master_xfer should return the number of messages successfully
       processed, or a negative value on error */
    int (*master_xfer)(struct i2c_adapter *adap, struct i2c_msg *msgs,
                       int num);
    int (*smbus_xfer)(struct i2c_adapter *adap, u16 addr,
                      unsigned short flags, char read_write,
                      u8 command, int size, union i2c_smbus_data *data);

    /* To determine what the adapter supports */
    u32 (*functionality)(struct i2c_adapter *);
};
The documentation for struct i2c_msg in the same header explains where master_xfer fits in:

 * An i2c_msg is the low level representation of one segment of an I2C
 * transaction. It is visible to drivers in the @i2c_transfer() procedure,
 * to userspace from i2c-dev, and to I2C adapter drivers through the
 * @i2c_adapter.@master_xfer() method.
There is also the i2c-gpio.c file in drivers/i2c/busses/. It uses the bit-banging algorithm from drivers/i2c/algos/i2c-algo-bit.c, in which master_xfer is filled with bit_xfer, a bit-banging implementation.
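For orientation, here is a skeletal sketch of how a bus driver supplies its own master_xfer and registers it with the core (all names are hypothetical; real drivers such as i2c-pxa.c or i2c-algo-bit.c do the actual hardware work inside the xfer routine):

#include <linux/i2c.h>
#include <linux/module.h>

/* Process each i2c_msg on the wire; return the number of messages
 * handled, or a negative errno. A real driver talks to its hardware here. */
static int my_bus_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
{
    /* ...drive SCL/SDA or program the controller for each msgs[i]... */
    return num;
}

static u32 my_bus_func(struct i2c_adapter *adap)
{
    return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
}

static const struct i2c_algorithm my_bus_algo = {
    .master_xfer   = my_bus_xfer,
    .functionality = my_bus_func,
};

static struct i2c_adapter my_adapter = {
    .owner = THIS_MODULE,
    .name  = "my-i2c-bus",
    .algo  = &my_bus_algo,
};

static int __init my_bus_init(void)
{
    /* makes the bus visible; i2c_transfer() on it ends up in my_bus_xfer() */
    return i2c_add_adapter(&my_adapter);
}
module_init(my_bus_init);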