How to access this bitrate value in this union? - linux-kernel

I'm writing a kernel module that needs to read the value of bitrate from this union:
union iwreq_data
{
    ...
    struct iw_param bitrate;    /* default bit rate */
    ...
};
This code is taken from wireless.h. Does anyone know how I can get its value? I mean, which struct should I go through: net_device? wireless_dev? I'm using Linux kernel 2.6.35.

If you have union iwreq_data *data, you can simply use data->bitrate.value.
But this structure doesn't permanently exist, so you can't get a pointer to it for a given device. The structure is used when setting or getting parameters for a device, and exists only for the duration of the set/get operation.
When setting the bitrate, the driver saves the new value in a driver-dependent manner, and the structure is released (it's normally allocated on the stack of the setting function).
You can try calling ieee80211softmac_wx_get_rate to get it. Give it a pointer to an uninitialized union iwreq_data, and it will fill in the bit rate.
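A minimal sketch of that approach (assuming your tree still ships the ieee80211softmac code, whose prototype lives in <net/ieee80211softmac_wx.h> on older 2.6 kernels, and that dev is the device's struct net_device pointer; for a different driver you would call its SIOCGIWRATE handler the same way, handing it a stack-allocated union iwreq_data):

union iwreq_data wrqu;                               /* exists only for this call */
struct iw_request_info info = { .cmd = SIOCGIWRATE, .flags = 0 };
int err;

memset(&wrqu, 0, sizeof(wrqu));
err = ieee80211softmac_wx_get_rate(dev, &info, &wrqu, NULL);
if (err == 0)
    printk(KERN_INFO "current bitrate: %d bps\n", wrqu.bitrate.value);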

Related

CUDA dynamic parallelism: Access child kernel results in global memory

I am currently trying my first dynamic parallelism code in CUDA. It is pretty simple. In the parent kernel I am doing something like this:
int aPayloads[32];
// Compute aPayloads start values here
int* aGlobalPayloads = nullptr;
cudaMalloc(&aGlobalPayloads, (sizeof(int) *32));
cudaMemcpyAsync(aGlobalPayloads, aPayloads, (sizeof(int)*32), cudaMemcpyDeviceToDevice);
mykernel<<<1, 1>>>(aGlobalPayloads); // Modifies data in aGlobalPayloads
cudaDeviceSynchronize();
// Access results in payload array here
Assuming that I do things right so far, what is the fastest way to access the results in aGlobalPayloads after kernel execution? (I tried cudaMemcpy() to copy aGlobalPayloads back to aPayloads but cudaMemcpy() is not allowed in device code).
You can directly access the data in aGlobalPayloads from your parent kernel code, without any copying:
mykernel<<<1, 1>>>(aGlobalPayloads); // Modifies data in aGlobalPayloads
cudaDeviceSynchronize();
int myval = aGlobalPayloads[0];
I'd encourage careful error checking (read the whole accepted answer here); you do it in device code the same way as in host code. Also note that your cudaMemcpyAsync call is not legal in device code: the programming guide states that you "May not pass in local or shared memory pointers" to it, and your aPayloads is a local memory pointer.
If for some reason you want that data to be explicitly put back in your local array, you can use in-kernel memcpy for that:
memcpy(aPayloads, aGlobalPayloads, sizeof(int)*32);
int myval = aPayloads[0]; // retrieves the same value
(An in-kernel memcpy like this is also how I would fix the cudaMemcpyAsync issue mentioned above.)
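Putting the pieces together, a minimal sketch of the parent kernel (placeholder names; compiled with relocatable device code, -rdc=true, and assuming a toolkit where device-side cudaDeviceSynchronize() is still supported, as in the question):

__global__ void mykernel(int* payloads);          // child kernel, modifies payloads

__global__ void parentKernel()
{
    int aPayloads[32];
    // ... compute aPayloads start values here ...

    int* aGlobalPayloads = nullptr;
    if (cudaMalloc(&aGlobalPayloads, sizeof(int) * 32) != cudaSuccess)
        return;

    // plain in-kernel memcpy instead of cudaMemcpyAsync, because aPayloads
    // is a local-memory pointer
    memcpy(aGlobalPayloads, aPayloads, sizeof(int) * 32);

    mykernel<<<1, 1>>>(aGlobalPayloads);          // modifies data in aGlobalPayloads
    if (cudaGetLastError() == cudaSuccess) {      // check the child launch
        cudaDeviceSynchronize();                  // wait for the child to finish
        int myval = aGlobalPayloads[0];           // read results directly
        printf("first result: %d\n", myval);      // device-side printf, just for illustration
        // or: memcpy(aPayloads, aGlobalPayloads, sizeof(int) * 32);
    }
    cudaFree(aGlobalPayloads);
}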

cdev_alloc() vs cdev_init()

In Linux kernel modules, two different approaches can be followed when creating a struct cdev, as suggested on this site and in this answer:
First approach, cdev_alloc()
struct cdev *my_dev;
...
static int __init example_module_init(void) {
    ...
    my_dev = cdev_alloc();
    if (my_dev != NULL) {
        my_dev->ops = my_fops;      /* The file_operations structure */
        my_dev->owner = THIS_MODULE;
    }
    else
        ...
}
Second approach, cdev_init()
static struct cdev my_cdev;
...
static int __init example_module_init(void) {
    ...
    cdev_init(&my_cdev, my_fops);
    my_cdev.owner = THIS_MODULE;
    ...
}
(assuming that my_fops is a pointer to an initialized struct file_operations).
Is the first approach deprecated, or still in use?
Can cdev_init() be used also in the first approach, with cdev_alloc()? If no, why?
The second question is also in a comment in the linked answer.
Can cdev_init() be used also in the first approach, with cdev_alloc()?
No, cdev_init shouldn't be used for a character device allocated with cdev_alloc.
To some extent, cdev_alloc is equivalent to kmalloc plus cdev_init, so calling cdev_init for a character device created with cdev_alloc makes no sense.
Moreover, a character device allocated with cdev_alloc contains a hint that the device should be deallocated when it is no longer used. Calling cdev_init for that device will clear that hint, so you will get a memory leak.
The choice between cdev_init and cdev_alloc depends on the lifetime you want the character device to have.
Usually, one wants the lifetime of the character device to be the same as the lifetime of the module. In that case (see the sketch after this list):
Define a static or global variable of type struct cdev.
Create the character device in the module's init function using cdev_init.
Destroy the character device in the module's exit function using cdev_del.
Make sure that file operations for the character device have .owner field set to THIS_MODULE.
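A minimal sketch of that first case (assuming my_fops points to an initialized struct file_operations with .owner = THIS_MODULE, and my_devt is a dev_t obtained earlier, e.g. from alloc_chrdev_region):

static struct cdev my_cdev;

static int __init example_module_init(void)
{
    int err;

    cdev_init(&my_cdev, my_fops);
    my_cdev.owner = THIS_MODULE;

    err = cdev_add(&my_cdev, my_devt, 1);   /* makes the device live */
    if (err)
        return err;
    return 0;
}

static void __exit example_module_exit(void)
{
    cdev_del(&my_cdev);                     /* destroyed in the exit function */
}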
In more complex cases, one wants to create a character device at a specific point after the module has initialized. E.g. a module could provide a driver for some hardware, and a character device should be bound to that hardware. In that case the character device cannot be created in the module's init function (because the hardware has not been detected yet) and, more importantly, the character device cannot be destroyed in the module's exit function. In that case:
Define a field of pointer type struct cdev * inside the structure that describes the hardware.
Create the character device with cdev_alloc in the function that creates (probes) the hardware.
Destroy the character device with cdev_del in the function that destroys (disconnects) the hardware.
In the first case cdev_del is called at a time when the character device is no longer used by a user. This guarantee is provided by THIS_MODULE in the file operations: a module cannot be unloaded while a file corresponding to the character device is open in userspace.
In the second case there is no such guarantee (because cdev_del is NOT called in the module's exit function), so at the time cdev_del returns, the character device may still be in use by a user. And this is where cdev_alloc really matters: deallocation of the character device is deferred until the user closes all file descriptors associated with it. Such behavior cannot be obtained without cdev_alloc.
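A minimal sketch of that second case (hypothetical names; struct my_hw stands for whatever structure describes your hardware, with my_fops and my_devt as above):

struct my_hw {
    struct cdev *cdev;
    /* ... other hardware state ... */
};

static int my_hw_probe(struct my_hw *hw)
{
    int err;

    hw->cdev = cdev_alloc();
    if (!hw->cdev)
        return -ENOMEM;

    hw->cdev->ops = my_fops;
    hw->cdev->owner = THIS_MODULE;

    err = cdev_add(hw->cdev, my_devt, 1);
    if (err) {
        kobject_put(&hw->cdev->kobj);   /* frees the cdev allocated by cdev_alloc */
        return err;
    }
    return 0;
}

static void my_hw_disconnect(struct my_hw *hw)
{
    /* the structure itself is freed only after the last open file is closed */
    cdev_del(hw->cdev);
}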
They do different things, and the usual preference applies: avoid dynamic allocation when it isn't needed, and use a statically allocated object when possible.
cdev_alloc() dynamically allocates my_dev, so the structure is eventually kfree()'d after cdev_del(), once its last reference is dropped.
cdev_init() does not free anything; you manage the storage of the struct cdev yourself.
Most importantly, the lifetime of the structure differs: in the cdev_init() case, static struct cdev my_cdev lives as long as the module (or whatever object contains it), while cdev_alloc() returns a dynamically allocated pointer that remains valid until it is freed.

Use local variable as capture value in boost Asio bind

I asked a question about boost::asio here, but some additional questions came up today. I have a very simple server/client structure, and at one point this async_write command:
ushort _nSetupReceiveBuffer[_nDynamicSize];
boost::asio::async_write(m_oSocket, boost::asio::buffer(&_nSetupReceiveBuffer, _nDynamicSize),
[this, self](boost::system::error_code _oError, std::size_t)
{
std::cout << _nSetupReceiveBuffer.size() << std::endl;
});
Unfortunately it results in the error ‘_nSetupReceiveBuffer’ is not captured.
So my questions are:
How can I capture _nSetupReceiveBuffer, or better, capture its reference? (Capturing its reference with [this, self, &_nSetupReceiveBuffer] builds, but results in a Segmentation fault (core dumped) even before any data is received; since it is executed as a callback, I assume the original variable has already been destroyed.)
I use ushort because I want to transmit cv::Mat images with the CV_16U setting and tried to follow this idea. Do you have other ideas for how to transmit cv::Mat files via boost? I can only use lossless containers, but I want to avoid high CPU load. In my case the bandwidth shouldn't be the problem. However, that is only half of the truth: I tried to use a serializer which serializes the image to a string, which worked fine but increased the size dramatically :-(
To avoid the problems from 1., I could make _nSetupReceiveBuffer a member variable, but I need to allocate it dynamically at runtime, which I don't know how to do. And a second drawback would be that the variable type ushort is fixed, but I want to be flexible, e.g. if I have another video stream of CV_8U type, I need to change it to uchar.
My last approach was to reshape() the cv::Mat and use a std::vector<ushort> as a member variable which stores the single values. Is that reasonable, or does it just generate a large overhead? Can I define a vector as a member variable without a type specified, or should I define one for each type and then just use the appropriate one?
Thank you for your help.
How can I capture _nSetupReceiveBuffer, or better, capture its reference? (Capturing its reference with [this, self, &_nSetupReceiveBuffer] builds, but results in a Segmentation fault (core dumped) even before any data is received; since it is executed as a callback, I assume the original variable has already been destroyed.)
Capturing the reference of a variable that will disappear is useless.
Capturing the variable by value means you cannot pass that same buffer to the async call.
To avoid the problems from 1., I could make _nSetupReceiveBuffer a member variable, but I need to allocate it dynamically at runtime, which I don't know how to do.
Use std::vector. This should be lesson 1 in C++.
And a second drawback would be that the variable type ushort is fixed, but I want to be flexible, e.g. if I have another video stream of CV_8U type, I need to change it to uchar.
This indicates you should probably separate the IO operation from your class. If you make a type to represent the IO operation, you can make the buffer a member of that type.
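A rough sketch of that idea (all names here are made up, not taken from your code): the operation object owns its buffer as a std::vector sized at runtime, and the shared_ptr captured by the completion handler keeps both alive until the write finishes:

#include <boost/asio.hpp>
#include <cstdint>
#include <iostream>
#include <memory>
#include <vector>

struct WriteOp : std::enable_shared_from_this<WriteOp> {
    WriteOp(boost::asio::ip::tcp::socket& socket, std::vector<std::uint16_t> data)
        : socket_(socket), buffer_(std::move(data)) {}

    void start() {
        auto self = shared_from_this();
        boost::asio::async_write(socket_, boost::asio::buffer(buffer_),
            [self](boost::system::error_code ec, std::size_t transferred) {
                if (!ec)
                    std::cout << transferred << " bytes sent, "
                              << self->buffer_.size() << " elements\n";
            });
    }

    boost::asio::ip::tcp::socket& socket_;
    std::vector<std::uint16_t> buffer_;   // sized at runtime; pick the element type per operation type
};

// usage inside the session class, e.g.:
//   std::make_shared<WriteOp>(m_oSocket, std::move(values))->start();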
My last approach was to reshape() the cv::Mat and use a std::vector<ushort> as a member variable which stores the single values. Is that reasonable, or does it just generate a large overhead? Can I define a vector as a member variable without a type specified, or should I define one for each type and then just use the appropriate one?
I'd suggest separating the serialization concerns and using Boost Serialization, e.g. as shown here: Serializing OpenCV Mat_<Vec3f>
If you can be sure your matrices are always continuous, you could use the buffer directly:
boost::asio::async_write(m_oSocket, boost::asio::buffer(mat.ptr(), mat.total() * mat.elemSize()),
    [this, self](boost::system::error_code ec, std::size_t transferred)
    {
        std::cout << transferred << std::endl;
    });
Of course, you'll have the same lifetime considerations as with the _nSetupReceiveBuffer described above. Also, keep in mind that other threads should not touch the data until the IO operation completes.

How should I find the size in OMNeT++?

I need to find the packet size sent by each node in OMNeT++. Do I need to set it myself, or is there any way of finding the packet size, which changes dynamically?
Kindly tell me the procedure for finding the packet size.
I think what you're trying to say is, where can you find the "inherent" size of a packet, for example of one that has been defined in a .msg file, based on "what's in it".
If I'm right: You can't. And shouldn't really want to. Since everything inside an OMNeT++ simulation is... simulation, no matter what the actual contents of a cPacket are, the bitLength property can be set to any value, with no regard to the amount of information stored in your custom messages.
So the only size any packet will have is the size set either by you manually, or by the model library you are using, with the setBitLength() method.
Setting the length manually is useful in scenarios where a protocol header has fields of some weird length, like 3 bits, then 9 bits, and 1 flag bit, etc. It is best to represent these fields as separate members in the message class, and since C++ doesn't have* such flexible-size data types, the representation in the simulation and the represented header will have different sizes.
Or when you want to cheat and transmit extra information with a packet that wouldn't really be part of it (of the actual bit sequence) on a real network.
So you should just set the appropriate length with setBitLength, and don't care about what is actually stored. Usually. Until your computer runs out of memory.
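For illustration, a small sketch of setting the length explicitly, independent of what is stored (MyPacket and its field are hypothetical, generated from a .msg file):

// inside some cSimpleModule, e.g. in handleMessage()
MyPacket *pkt = new MyPacket("request");    // hypothetical message type from a .msg file
pkt->setRegisterId(7);                      // hypothetical field stored in the object
pkt->setBitLength(3 + 9 + 1);               // "wire" size: 3-bit field + 9-bit field + 1 flag bit
send(pkt, "out");                           // assumes an output gate named "out"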
I might be completely wrong about what you're trying to get to.
*Yes, there are bit fields, but ... it's easier not having to deal with them.
If you are talking about cPacket objects in OMNeT++, then simply use the corresponding getter methods for the length of a packet. That is for cases where the packets have a real size set either by you or by the model code.
From cpacket.h in the OMNeT++ 5.1 release:
/**
* Returns the packet length (in bits).
*/
virtual int64_t getBitLength() const {return bitLength;}
/**
* Returns the packet length in bytes, that is, bitlength/8. If bitlength
* is not a multiple of 8, the result is rounded up.
*/
int64_t getByteLength() const {return (getBitLength()+7)>>3;}
So simply read the value, maybe write it into a temporary variable, and use it for whatever you need.
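For example, inside handleMessage() (assuming the incoming cMessage *msg is known to be a cPacket):

cPacket *pkt = check_and_cast<cPacket *>(msg);
int64_t bits  = pkt->getBitLength();
int64_t bytes = pkt->getByteLength();
EV << "received packet of " << bits << " bits (" << bytes << " bytes)\n";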

Why does DeviceIoControl prepend 12 bytes of information to the user-provided input buffer?

I hope this does not turn out to be a totally braindead question.
I am editing a template WDF Windows USB device driver to send formatted data to one of the device's bulk out pipes; the data has to be set up in a certain way to tell the device to read an internal register.
The problem is that I cannot get the data to go across the bus in the exact format necessary. I wrote a small test app to enumerate the device and call DeviceIoControl with the input buffer set to a struct I set up according to spec.
I have a copy of a USB bus trace for a working case (performed by a driver whose source I have no access to), and I captured a bus trace for what happens when I call the custom IOCTL in my driver. What I see go across the bus is the data structure I set up prefixed with twelve bytes of data; the data structure is correct, but I want to know what the initial twelve bytes of data are, and stop the driver from sending them.
The driver, I believe, has been written properly; I put some debug traces in the driver and it looks like the buffer retrieved by WdfRequestRetrieveInputMemory already has the 12 bytes prepended, so this seems to be happening before the request reaches the driver.
If it is useful information, the IOCTL is set up as METHOD_BUFFERED with FILE_ANY_ACCESS.
The relevant portion of the test code that sets this up is very simple:
const ULONG ulBufferSize = sizeof( CONTROL_READ_DATA );
unsigned char pBuffer[sizeof(CONTROL_READ_DATA)];
DWORD dwBytesReturned;
CONTROL_READ_DATA* readData = (CONTROL_READ_DATA*)pBuffer;
readData->field1 = data;
readData->field2 = moreData;
// ... all fields filled in...
// Send IOCTLs into camera
if( !::DeviceIoControl( hDevice,
IOCTL_CUSTOM_000,
&readData,
ulBufferSize,
&readData,
ulBufferSize,
&dwBytesReturned,
NULL ) )
{
dwError = ::GetLastError();
// Clean up here
return dwError;
}
The data I see go across the bus is: 80FD1200 CCCCCCCC CCCCCCCC + (My data).
Does anyone have any insights?
Wow, a really ridiculous error. Notice that I'm passing the address of readData to DeviceIoControl, when readData is itself already a pointer to the buffer. I can't believe I wasted so much time on this.
Thanks all!
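For reference, a corrected sketch of the call: pass the buffer itself (readData, or equivalently pBuffer) rather than the address of the pointer:

if( !::DeviceIoControl( hDevice,
                        IOCTL_CUSTOM_000,
                        readData,        /* was &readData: that sent the pointer value itself */
                        ulBufferSize,
                        readData,
                        ulBufferSize,
                        &dwBytesReturned,
                        NULL ) )
{
    dwError = ::GetLastError();
    // Clean up here
    return dwError;
}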
Alignment of the data is the culprit. Check out http://msdn.microsoft.com/en-us/library/2e70t5y1(v=vs.80).aspx to set it to one.
