I need to write a kernel module that uses an ACPI method to communicate with a hardware device.
At this point I just want to load the driver and enumerate the devices on the bus.
I found a fairly old but reasonable example online; below is the basic outline. I took the example pretty much verbatim, only changing names, and I used acpidump to extract the DSDT table and get the correct device ID, etc.
The driver loads fine, but my add function is never called. My suspicion is that I am missing a step to trigger a scan of the bus after I register the driver; the example assumes the driver is loaded at boot. Is there a way to request that the bus be rescanned after registering the driver, so that any devices attached to it are added? My suspicion may well be wrong, so if my assumptions are off, please correct me.
Below is the source:
#include <linux/module.h>
#include <linux/acpi.h>

static int nv_acpi_add(struct acpi_device *device);
static int nv_acpi_remove(struct acpi_device *device);
static void nv_acpi_notify(struct acpi_device *adev, u32 event);
static const struct acpi_device_id nv_device_ids[] = {
    { "ACPI0012", 0 },
    { "", 0 },
};
MODULE_DEVICE_TABLE(acpi, nv_device_ids);

static struct acpi_driver nv_acpi_driver = {
    .name = "NV NVDR",
    .class = "NV",
    .ids = nv_device_ids,
    .ops = {
        .add = nv_acpi_add,
        .remove = nv_acpi_remove,
        .notify = nv_acpi_notify,
    },
    .owner = THIS_MODULE,
};
//static struct acpi_device acpi_dev;
static int nv_acpi_add(struct acpi_device *device)
{
    printk(KERN_INFO "NV: acpi bus add\n");
    return 0;
}

static int nv_acpi_remove(struct acpi_device *device)
{
    printk(KERN_INFO "NV: acpi bus remove\n");
    return 0;
}
static void nv_acpi_notify(struct acpi_device *adev, u32 event)
{
    device_lock(&adev->dev);
    printk(KERN_INFO "notification detected\n");
    device_unlock(&adev->dev);
}
static int __init nv_init(void)
{
    int result;

    result = acpi_bus_register_driver(&nv_acpi_driver);
    if (result < 0) {
        printk(KERN_ERR "Error registering driver\n");
        return -ENODEV;
    }
    return 0;
}

static void __exit nv_exit(void)
{
    acpi_bus_unregister_driver(&nv_acpi_driver);
}

module_init(nv_init);
module_exit(nv_exit);

MODULE_LICENSE("GPL");
Well, it turns out that another ACPI bus driver was already registered for the ACPI device ID I was using, and as a consequence the kernel did not call my add routine. When I ran the module on a different kernel, my add routine was called as expected.
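As a side note, since the stated goal was to call an ACPI method once the device is found, here is a minimal sketch (my addition, not part of the original post) of how such a call is commonly made using acpi_evaluate_object. The method name "MTHD" and the single integer argument are hypothetical placeholders; the real method name and argument layout come from the DSDT.

#include <linux/slab.h>     /* for kfree() */

static int nv_call_acpi_method(struct acpi_device *device)
{
    struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
    union acpi_object arg;
    struct acpi_object_list args;
    acpi_status status;

    /* One hypothetical integer argument; the real layout depends on the DSDT */
    arg.type = ACPI_TYPE_INTEGER;
    arg.integer.value = 0;
    args.count = 1;
    args.pointer = &arg;

    /* "MTHD" is a placeholder for the method name found in the DSDT */
    status = acpi_evaluate_object(device->handle, "MTHD", &args, &output);
    if (ACPI_FAILURE(status))
        return -EIO;

    /* output.pointer (a union acpi_object) holds the return value */
    kfree(output.pointer);
    return 0;
}

It could be called from nv_acpi_add(), for example, once the device has been enumerated.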
Hello fellow developers,
I am developing a Linux kernel module that uses a DMA channel to transfer memory
(on STM32MP157F).
This works, but additional tuning is needed. The STM32 MDMA kernel driver makes this possible through this private channel configuration struct:
struct stm32_mdma_chan_config {
u32 request;
u32 priority_level;
u32 transfer_config;
u32 mask_addr;
u32 mask_data;
bool m2m_hw;
};
I would like to set the priority from 0x0 to 0x3, which is the maximum allowed value.
The variable priority_level is set in the stm32_mdma_of_xlate function:
static struct dma_chan *stm32_mdma_of_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct stm32_mdma_chan_config config;
config.request = dma_spec->args[0];
config.priority_level = dma_spec->args[1];
....
}
Other modules/drivers in the system use a device tree entry like this for their DMA channel configuration:
spi#44004000 {
#address-cells = <0x01>;
#size-cells = <0x00>;
dmas = <0x0e 0x25 0x400 0x01 0x0e 0x26 0x400 0x01>;
dma-names = "rx\0tx";
};
spi-stm32.c calls of_match_device in its stm32_spi_probe function. I believe the dma configuration is done during its execution.
I would like something similar for my character device driver:
mydriver#0 {
compatible = "mydriver";
dmas = <&mdma1 36 0x0 0x40008 0x0 0x0>,
<&mdma1 37 0x0 0x40002 0x0 0x0>;
dma-names = "rx", "tx";
};
But this is currently ignored, because I do not have a platform device I could use to call of_match_device. I seem to be blinded by too much source code to look at ...
Any tips for me?
Update:
Currently studying http://xillybus.com/tutorials/device-tree-zynq-3
Best regards Gunther
I solved my issues after doing some extra research.
These are the steps:
I had to add a custom entry to the device tree:
mydriver_0: mydriver#0 {
compatible = "mydriver";
dmas = <&mdma1 22 0x3 0x1200000a 0x48001008 0x00000020 1>;
dma-names = "dma0";
};
Make the driver loadable as a platform driver: create an of_device_id match table matching the device tree entry, change module_init() to call platform_driver_register() so that the match table is registered, then implement a probe function that calls the previous init function and stores pdev.
static int mydriver_drv_probe(struct platform_device *pdev)
{
    // store pdev for the later call to: dma_request_chan(&pdev->dev, "dma0");
    // TODO: init driver
    return 0;
}

static int mydriver_drv_remove(struct platform_device *pdev)
{
    // TODO: release resources
    return 0;
}

/* Connection to device tree */
static const struct of_device_id mydriver_of_match[] =
{
    { .compatible = "mydriver" },
    {}
};
MODULE_DEVICE_TABLE(of, mydriver_of_match);

static struct platform_driver mydriver_platform_driver = {
    .probe = mydriver_drv_probe,
    .remove = mydriver_drv_remove,
    .driver = {
        .name = "mydriver",
        .owner = THIS_MODULE,
        .of_match_table = mydriver_of_match,
    },
};

static int __init _mydriver_driver_init(void)
{
    return platform_driver_register(&mydriver_platform_driver);
}
module_init(_mydriver_driver_init);
Get the correctly configured DMA channel:
chan = dma_request_chan(&pdev->dev, "dma0");
I tested this and it worked. The requested DMA channel is configured using my device tree "dmas" entry.
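To round this off, here is a rough sketch, purely my assumption of a typical dmaengine sequence and not part of the original solution, of how the requested channel could then drive a memory-to-memory transfer. Buffer mapping and error handling are reduced to a minimum; dst and src are DMA addresses obtained elsewhere (for example via dma_map_single()).

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/completion.h>

static void mydriver_dma_done(void *param)
{
    complete((struct completion *)param);   /* signal the waiting thread */
}

static int mydriver_do_memcpy(struct dma_chan *chan, dma_addr_t dst,
                              dma_addr_t src, size_t len)
{
    struct dma_async_tx_descriptor *desc;
    struct completion done;
    dma_cookie_t cookie;

    desc = dmaengine_prep_dma_memcpy(chan, dst, src, len, DMA_PREP_INTERRUPT);
    if (!desc)
        return -EINVAL;

    init_completion(&done);
    desc->callback = mydriver_dma_done;
    desc->callback_param = &done;

    cookie = dmaengine_submit(desc);
    if (dma_submit_error(cookie))
        return -EINVAL;

    dma_async_issue_pending(chan);          /* kick off the transfer */
    wait_for_completion(&done);
    return 0;
}

When the driver is done with the channel it should release it again with dma_release_channel(chan).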
I hope this is of use to somebody else!
I'm working on writing a kernel hwmon driver module for a chip that communicates over LPC (ISA-style bus). I have the following code so far:
static umode_t qnap_ec_is_visible(const void *data, enum hwmon_sensor_types type, u32 attr, int channel)
{
    return 0444; /* TODO: decide visibility per attribute; read-only for now */
}

static int qnap_ec_read(struct device *dev, enum hwmon_sensor_types type, u32 attr, int channel, long *val)
{
    *val = 0;    /* TODO: read the value from the EC over LPC */
    return 0;
}

static int qnap_ec_write(struct device *dev, enum hwmon_sensor_types type, u32 attr, int channel, long val)
{
    return 0;    /* TODO: write the value to the EC over LPC */
}
static const struct hwmon_ops qnap_ec_ops = {
    .is_visible = qnap_ec_is_visible,
    .read = qnap_ec_read,
    .write = qnap_ec_write
};

static const struct hwmon_channel_info *qnap_ec_channel_info[] = {
    HWMON_CHANNEL_INFO(pwm, HWMON_PWM_INPUT),
    HWMON_CHANNEL_INFO(fan, HWMON_F_INPUT),
    HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT),
    NULL
};

static const struct hwmon_chip_info qnap_ec_chip_info = {
    .ops = &qnap_ec_ops,
    .info = qnap_ec_channel_info
};

static int qnap_ec_probe(struct platform_device *platform_dev)
{
    struct device *dev;

    dev = devm_hwmon_device_register_with_info(&platform_dev->dev, "qnap_ec_hwmon", NULL,
                                               &qnap_ec_chip_info, NULL);
    return PTR_ERR_OR_ZERO(dev);
}

static const struct of_device_id qnap_ec_of_match[] = {
    { .compatible = "???" },
    {}
};
MODULE_DEVICE_TABLE(of, qnap_ec_of_match);

static struct platform_driver qnap_ec_driver = {
    .driver = {
        .name = "qnap_ec_hwmon",
        .of_match_table = qnap_ec_of_match
    },
    .probe = qnap_ec_probe
};
module_platform_driver(qnap_ec_driver);
However, I'm pretty sure this approach (using a device ID and having the kernel call the probe function when it finds that device ID on the system) won't work for something on the LPC bus. The it87 driver, which also communicates over the LPC bus, uses __init/__exit functions to enter the driver; however, that driver is very large and probably not an ideal example of a simple driver module. Are there any examples available of how to write a basic (i.e. no real functionality, just the skeleton) kernel hwmon driver for an LPC device? One of the things I can't find an answer for: if I use the __init/__exit functions, can I still register the hwmon device with devm_hwmon_device_register_with_info, or do I need to use another approach (the it87 driver, for example, uses platform_driver_register, but I'm not sure why, since there doesn't seem to be any documentation on the correct approach for LPC devices)?
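For what it's worth, here is a minimal sketch of the pattern the it87 driver (and other Super-I/O / LPC drivers) follow: because devices on the LPC/ISA bus are not enumerable, the module's __init function both registers a platform driver and creates the matching platform device itself, replacing the of_match_table / module_platform_driver() part above. The probe callback can still use devm_hwmon_device_register_with_info exactly as in the question. This reuses the qnap_ec_* naming from the question and is an untested outline under those assumptions, not a finished driver.

#include <linux/module.h>
#include <linux/platform_device.h>

static struct platform_device *qnap_ec_pdev;

static struct platform_driver qnap_ec_pdriver = {
    .driver = {
        .name = "qnap_ec_hwmon",
    },
    .probe = qnap_ec_probe,    /* registers the hwmon device as shown above */
};

static int __init qnap_ec_init(void)
{
    int err;

    /* No enumeration on LPC: register the driver, then create the device by hand */
    err = platform_driver_register(&qnap_ec_pdriver);
    if (err)
        return err;

    qnap_ec_pdev = platform_device_register_simple("qnap_ec_hwmon", -1, NULL, 0);
    if (IS_ERR(qnap_ec_pdev)) {
        platform_driver_unregister(&qnap_ec_pdriver);
        return PTR_ERR(qnap_ec_pdev);
    }
    return 0;
}

static void __exit qnap_ec_exit(void)
{
    platform_device_unregister(qnap_ec_pdev);
    platform_driver_unregister(&qnap_ec_pdriver);
}

module_init(qnap_ec_init);
module_exit(qnap_ec_exit);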
I have a smart card IC module, and I want to create a Linux device driver for it. The module uses SPI as the control interface and has an interrupt line to indicate that a card is present. I know how to create an SPI device in the Linux kernel and how to read data in the kernel when the interrupt happens. But I have no idea how to transfer the data to user space (maybe I need to create a device node for it), or how to notify user space when the interrupt occurs. Does anyone have suggestions?
One way you can go about this is by creating a devfs entry and then having the interested process open that device and receive asynchronous notification from the device driver using fasync.
Once you have the notification in user space you can notify other interested processes by any means you deem fit.
I am writing a small trimmed down example illustrating this feature.
On the driver side
/* Appropriate headers */
static int myfasync(int fd, struct file *fp, int on);

static struct fasync_struct *fasyncQueue;

static struct file_operations fops =
{
    .open = charDriverOpen,
    .release = charDriverClose,
    .read = charDriverRead,
    .write = charDriverWrite,
    .unlocked_ioctl = charDriverCtrl,
    // This will be called when the FASYNC flag is set
    .fasync = myfasync,
};

static int __init charDriverEntry(void)
{
    // Appropriate init for the driver.
    // Nothing specific needs to be done here with respect to
    // the fasync feature.
    return 0;
}

static int myfasync(int fd, struct file *fp, int on)
{
    // Register (or deregister, when on == 0) the process behind fp
    // in the list of processes to be notified when an event occurs
    return fasync_helper(fd, fp, on, &fasyncQueue);
}

// Now to the part where we want to notify the processes listed
// in fasyncQueue when something happens. In this example I had
// implemented a timer; not getting into the details of the timer
// function here.
static void send_signal_timerfn(unsigned long data)
{
    ...
    printk(KERN_INFO "timer expired\n");
    kill_fasync(&fasyncQueue, SIGIO, POLL_OUT);
    ...
}
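One piece the trimmed example leaves out, added here as my assumption about what charDriverClose would do: when the file is closed, the process should be removed from the fasync queue, which is conventionally done by calling the fasync handler with on = 0.

static int charDriverClose(struct inode *inode, struct file *fp)
{
    // Drop this file from the async notification queue on close
    myfasync(-1, fp, 0);
    return 0;
}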
On the user land process side
#include <stdio.h>
#include <signal.h>
#include <fcntl.h>
#include <unistd.h>

void my_notifier(int signo, siginfo_t *sigInfo, void *data)
{
    printf("Signal received from the driver, expected %d got %d\n", SIGIO, signo);
}

int main()
{
    struct sigaction signalInfo;
    int flagInfo;
    int fp;

    signalInfo.sa_sigaction = my_notifier;
    signalInfo.sa_flags = SA_SIGINFO;
    sigemptyset(&signalInfo.sa_mask);
    sigaction(SIGIO, &signalInfo, NULL);

    fp = open("/dev/myCharDevice", O_RDWR);
    if (fp < 0)
        printf("Failed to open\n");

    /* Now we will own the device so that we can get the signal from it */
    // Own the device
    fcntl(fp, F_SETOWN, getpid());
    flagInfo = fcntl(fp, F_GETFL);
    // Setting the FASYNC flag triggers the fasync fop
    fcntl(fp, F_SETFL, flagInfo | FASYNC);
    ...
}
Hope this clears things up.
For more detailed reading I suggest you read this
I am trying to learn to write an I2C driver on a Raspberry Pi board, and I have taken the Grove LCD backlight as the device. The driver's probe is not getting called, even though the driver is inserted into the system, as I can see in dmesg.
The driver's init code is getting called. The code =>
static int lcd_probe(struct i2c_client *i2c_client, const struct i2c_device_id *i2c_id)
{
    int ret = 0;
    //struct lcd_data *lcd_data;
    //struct device *dev = &i2c_client->dev;
    //lcd_data->client = i2c_client;

    pr_debug("lcd_probe : calling the probe\n");
    pr_debug("lcd_probe : i2c_client->addr = %d, i2c_client_name = %s\n",
             i2c_client->addr, i2c_client->name);
    return ret;
}

static const struct i2c_device_id lcd_id[] = {
    {"lcd", 0},
    {}
};
MODULE_DEVICE_TABLE(i2c, lcd_id);

static struct i2c_driver lcd_driver = {
    .driver = {
        .name = "lcd",
        .owner = THIS_MODULE,
    },
    .probe = lcd_probe,
    // .remove = lcd_remove,
    // .attach_adapter = lcd_attach,
    .detect = lcd_detect,    /* lcd_detect not shown here */
    .id_table = lcd_id,
};

static int __init lcd_init(void)
{
    pr_debug("lcd_init : calling init\n");
    return i2c_add_driver(&lcd_driver);
}
and dmesg =>
[ 1.971009] lcd_init : calling init
But the driver's probe is not getting called by the I2C subsystem.
Board file init code =>
/** start aartyaa lcd i2c driver */
printk(KERN_INFO "board file registering i2c lcd device\n");
i2c_register_board_info_dt(1, lcd_i2c_devices, ARRAY_SIZE(lcd_i2c_devices));
i2c_board_info code =>
/** aaryaa i2c lcd struct */
static struct i2c_board_info __initdata lcd_i2c_devices[] = {
{
.type = "lcd",
.addr = 0x62,
},
};
I added debug prints in i2c_register_device, and I found that the driver's probe is not getting called. I have linked the dmesg output:
dmesg link
It seems that I need to register something on the platform side as well ..
How does probe get called in I2C ... ?
Any help will be appreciated.
Thank you...!!!
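In case it helps while debugging: probe is only called once an I2C client device with a matching name exists on the adapter. If the board-info path is not taking effect, one way to create the client explicitly from a module is sketched below; this is only an assumption about your setup, with the adapter number 1 and address 0x62 taken from the question.

#include <linux/i2c.h>
#include <linux/err.h>

static struct i2c_client *lcd_client;

static int __init lcd_attach_device(void)
{
    struct i2c_adapter *adap;
    struct i2c_board_info info = {
        I2C_BOARD_INFO("lcd", 0x62),    /* must match lcd_id[].name */
    };

    adap = i2c_get_adapter(1);          /* bus 1 on the Raspberry Pi */
    if (!adap)
        return -ENODEV;

    /* On older kernels this is i2c_new_device() instead */
    lcd_client = i2c_new_client_device(adap, &info);
    i2c_put_adapter(adap);

    return IS_ERR_OR_NULL(lcd_client) ? -ENODEV : 0;
}

With the client created this way (or declared as a child node of the I2C controller in the device tree), the I2C core matches it against the driver's id_table by name and calls lcd_probe.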
Our company is rewriting most of our legacy C code in C++11 (which also means I am a C programmer learning C++). I need advice on message handlers.
We have a distributed system - a server process sends a packed message over TCP to a client process.
In C code this was being done:
- parse message based on type and subtype, which are always the first 2 fields
- call a handler as handler[type](Message *msg)
- handler creates temporary struct say, tmp_struct to hold the parsed values and ..
- calls subhandler[type][subtype](tmp_struct)
There is only one handler per type/subtype.
Moving to C++11 and a multi-threaded environment, the basic idea I had was:
1) Register a processor object for each type/subtype combination. This is
actually a vector of vectors -
vector< vector<MsgProcessor*> >
class MsgProcessor {
public:
    // Factory function
    virtual Message *create();
    virtual void Handler(Message *msg);
};
This will be inherited by different message processors
class AMsgProcessor : public MsgProcessor {
    Message *create() override;
    void Handler(Message *msg) override;
};
2) Get the processor using a lookup into the vector of vectors.
Get the message using the overloaded create() factory function, so that we can keep the actual message and the parsed values inside the message.
3) Now a bit of a hack: this message should be sent to other threads for the heavy processing. To avoid having to look up in the vector again, I added a pointer to the processor inside the message.
class Message {
const MsgProcessor *proc; // set to processor,
// which we got from the first lookup
// to get factory function.
};
So other threads will just do:
msg->proc->Handler(msg);
This looks bad, but the hope is that this will help separate the message handler from the factory. This is for the case when multiple type/subtype combinations want to create the same Message but handle it differently.
I was searching about this and came across :
http://www.drdobbs.com/cpp/message-handling-without-dependencies/184429055?pgno=1
It provides a way to completely separate the message from the handler. But I was wondering if my simple scheme above will be considered an acceptable design or not. Also is this a wrong way of achieving what I want?
Efficiency, as in speed, is the most important requirement for this application. We are already doing a couple of memory jumps: two vector lookups plus a virtual function call to create the message. There are two dereferences to get to the handler, which I guess is not good from a caching point of view.
Though your requirement is unclear, I think I have a design that might be what you are looking for.
Check out http://coliru.stacked-crooked.com/a/f7f9d5e7d57e6261 for the fully fledged example.
It has following components:
An interface class for Message processors IMessageProcessor.
A base class representing a Message. Message
A registration class, Registrator, which is essentially a singleton for storing the message processors corresponding to a (Type, Subtype) pair. It stores the mapping in an unordered_map; you can also tweak it a bit for better performance. All the exposed APIs of Registrator are protected by a std::mutex.
Concrete implementations of MessageProcessor. AMsgProcessor and BMsgProcessor in this case.
simulate function to show how it all fits together.
Pasting the code here as well:
/*
* http://stackoverflow.com/questions/40230555/efficient-message-factory-and-handler-in-c
*/
#include <iostream>
#include <vector>
#include <tuple>
#include <mutex>
#include <memory>
#include <cassert>
#include <unordered_map>
class Message;
class IMessageProcessor
{
public:
virtual Message* create() = 0;
virtual void handle_message(Message*) = 0;
virtual ~IMessageProcessor() {};
};
/*
* Base message class
*/
class Message
{
public:
virtual void populate() = 0;
virtual ~Message() {};
};
using Type = int;
using SubType = int;
using TypeCombo = std::pair<Type, SubType>;
using IMsgProcUptr = std::unique_ptr<IMessageProcessor>;
/*
* Registrator class maintains all the registrations in an
* unordered_map.
* This class owns the MessageProcessor instance inside the
* unordered_map.
*/
class Registrator
{
public:
static Registrator* instance();
// Disable other types of construction
Registrator(const Registrator&) = delete;
void operator=(const Registrator&) = delete;
public:
// TypeCombo assumed to be cheap to copy
template <typename ProcT, typename... Args>
std::pair<bool, IMsgProcUptr> register_proc(TypeCombo typ, Args&&... args)
{
auto proc = std::make_unique<ProcT>(std::forward<Args>(args)...);
bool ok;
{
std::lock_guard<std::mutex> _(lock_);
std::tie(std::ignore, ok) = registrations_.insert(std::make_pair(typ, std::move(proc)));
}
return (ok == true) ? std::make_pair(true, nullptr) :
// Return the heap allocated instance back
// to the caller if the insert failed.
// The caller now owns the Processor
std::make_pair(false, std::move(proc));
}
// Get the processor corresponding to TypeCombo
// IMessageProcessor passed is non-owning pointer
// i.e the caller SHOULD not delete it or own it
std::pair<bool, IMessageProcessor*> processor(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
auto fitr = registrations_.find(typ);
if (fitr == registrations_.end()) {
return std::make_pair(false, nullptr);
}
return std::make_pair(true, fitr->second.get());
}
// TypeCombo assumed to be cheap to copy
bool is_type_used(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
return registrations_.find(typ) != registrations_.end();
}
bool deregister_proc(TypeCombo typ)
{
std::lock_guard<std::mutex> _(lock_);
return registrations_.erase(typ) == 1;
}
private:
Registrator() = default;
private:
std::mutex lock_;
/*
* Should be replaced with a concurrent map if at all this
* data structure is the main contention point (which I find
* very unlikely).
*/
struct HashTypeCombo
{
public:
std::size_t operator()(const TypeCombo& typ) const noexcept
{
return std::hash<decltype(typ.first)>()(typ.first) ^
std::hash<decltype(typ.second)>()(typ.second);
}
};
std::unordered_map<TypeCombo, IMsgProcUptr, HashTypeCombo> registrations_;
};
Registrator* Registrator::instance()
{
static Registrator inst;
return &inst;
/*
* OR some other DCLP based instance creation
* if lifetime or creation of static is an issue
*/
}
// Define some message processors
class AMsgProcessor final : public IMessageProcessor
{
public:
class AMsg final : public Message
{
public:
void populate() override {
std::cout << "Working on AMsg\n";
}
AMsg() = default;
~AMsg() = default;
};
Message* create() override
{
std::unique_ptr<AMsg> ptr(new AMsg);
return ptr.release();
}
void handle_message(Message* msg) override
{
assert (msg);
auto my_msg = static_cast<AMsg*>(msg);
//.... process my_msg ?
//.. probably being called in some other thread
// Who owns the msg ??
(void)my_msg; // only for suppressing warning
delete my_msg;
return;
}
~AMsgProcessor();
};
AMsgProcessor::~AMsgProcessor()
{
}
class BMsgProcessor final : public IMessageProcessor
{
public:
class BMsg final : public Message
{
public:
void populate() override {
std::cout << "Working on BMsg\n";
}
BMsg() = default;
~BMsg() = default;
};
Message* create() override
{
std::unique_ptr<BMsg> ptr(new BMsg);
return ptr.release();
}
void handle_message(Message* msg) override
{
assert (msg);
auto my_msg = static_cast<BMsg*>(msg);
//.... process my_msg ?
//.. probably being called in some other thread
//Who owns the msg ??
(void)my_msg; // only for suppressing warning
delete my_msg;
return;
}
~BMsgProcessor();
};
BMsgProcessor::~BMsgProcessor()
{
}
TypeCombo read_from_network()
{
return {1, 2};
}
struct ParsedData {
};
Message* populate_message(Message* msg, ParsedData& pdata)
{
// Do something with the message
// Calling a dummy populate method now
msg->populate();
(void)pdata;
return msg;
}
void simulate()
{
TypeCombo typ = read_from_network();
bool ok;
IMessageProcessor* proc = nullptr;
std::tie(ok, proc) = Registrator::instance()->processor(typ);
if (!ok) {
std::cerr << "FATAL!!!" << std::endl;
return;
}
ParsedData parsed_data;
//..... populate parsed_data here ....
proc->handle_message(populate_message(proc->create(), parsed_data));
return;
}
int main() {
/*
* TODO: The return values from register_proc are not used or checked here;
* checking them is a must in production code!!
*/
// Register AMsgProcessor
Registrator::instance()->register_proc<AMsgProcessor>(std::make_pair(1, 1));
Registrator::instance()->register_proc<BMsgProcessor>(std::make_pair(1, 2));
simulate();
return 0;
}
UPDATE 1
The major source of confusion here seems to be that the architecture of the event system is unknown.
Any self-respecting event system architecture would look something like this:
A pool of threads polling on the socket descriptors.
A pool of threads for handling timer related events.
A comparatively small number of threads (depending on the application) to do long blocking jobs.
So, in your case:
You will get a network event on the thread doing epoll_wait or select or poll.
Read the packet completely and get the processor using the Registrator::processor() call.
NOTE: the processor() call can be made without any locking if one can guarantee that the underlying unordered_map does not get modified, i.e. no new inserts are made once we start receiving events.
Using the obtained processor we can get the Message and populate it.
Now, this is the part I am not sure how you want to handle. At this point we have the processor, and you can either call handle_message on it from the current thread (i.e. the thread doing epoll_wait) or dispatch it to another thread by posting the job (processor and message) to that thread's receiving queue.