passing/sharing the data using workQueue in linux kernel - linux-kernel

Can anyone please help me understand how to pass data (shared or private) to a workqueue?
1: Declare a call back/work handler
static void sample_work_fn(struct work_struct *Wq)
{
        ...........
        ...........
}
2: fill in a work_struct structure (statically)
static DECLARE_WORK(sample_work, sample_work_fn);
3: Schedule a workqueue
static void samp_sysrq(int arg)
{
        schedule_work(&sample_work);
}
If I need to pass or share data using my work queue, how can I do it?

Thanks Benjamin,
I found the information in "Linux transfer parameter for function in DECLARE_WORK" really straightforward, and it helped me understand. That link says:
DECLARE_WORK is primarily for static work items, where no instance data is needed. You want INIT_WORK. Use this to initialize a work_struct that is a member of another structure (of your choosing), then use container_of in the callback to get the pointer to the containing structure. This container_of pattern is extremely common in the Linux kernel, so it's a good idea to get familiar with it!
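Putting that together, here is a minimal sketch of how I would pass my own data (my own illustration; the context struct and its int field are just examples):

#include <linux/workqueue.h>
#include <linux/slab.h>

struct sample_ctx {
        struct work_struct work;   /* embed the work item in the data to pass */
        int data;                  /* whatever private data the handler needs */
};

static void sample_work_fn(struct work_struct *work)
{
        /* Recover the containing structure from the work_struct pointer. */
        struct sample_ctx *ctx = container_of(work, struct sample_ctx, work);

        pr_info("work item got data %d\n", ctx->data);
        kfree(ctx);                /* free it here if it was allocated per work item */
}

static void samp_sysrq(int arg)
{
        /* GFP_ATOMIC because a sysrq handler may not be allowed to sleep. */
        struct sample_ctx *ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);

        if (!ctx)
                return;
        ctx->data = arg;
        INIT_WORK(&ctx->work, sample_work_fn);
        schedule_work(&ctx->work);
}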

C++ pointer function and new keyword

Good Day,
I usually find it best to look at other people's code when I try to learn a programming language.
I am now trying to learn C++ but having some trouble understanding the following function (as an example):
Vehicle* MyClass::GetVehicleByID(uint id)
{
    Vehicle* car = new Vehicle;
    car->model = vehiclesArray[id].model;
    return car;
}

int main()
{
    Vehicle* car = myClass.GetVehicleByID(0);
    std::cout << "Car Model: " << car->model << std::endl;
}
I think I understand the concept of pointers. But I don't understand when this object will be destroyed. Will I have to manually delete the object "car" in the main function? Also, why are we using the new keyword instead of just using "Vehicle car();"? As I understand it the new keyword will allocate memory before populating it with the object?
Am I completely out of my depth just by asking these questions? Where or how can I learn to understand what is going on in this code? Because it seems like all tutorials only explain the "basics" like what a pointer is and the most basic way of using them.
Any help would be much appreciated.
Yes, if you create an object with the new keyword, you will have to delete it with the delete keyword. There is no garbage collection in C++. So in your case, this would be:
delete car;
Also, the difference between creating with new and just using the constructor directly as you suggest is that with new, the object is created on the heap, and its lifetime is extended until it is explicitly deleted by the programmer. In the other case, it will be created on the stack and will be deleted automatically as soon as the enclosing function or block is exited.
What happens in your case is that you create an object on the heap and never delete it. This causes a so-called memory leak. Since your program is small, this is not an issue, and the memory is released after the program finishes. However, in a long-running program, or a program that often allocates on the heap, it can cause the program to run out of available memory.
Also note that you could create an object inside the function, change the signature to return an object instead of a pointer, and have the function return that object directly. This would work, but what would happen is that first an object local to the function would be created on the stack. Then that object would be copied into another object created in the main function, and the first object would then be deleted. This is not very efficient, which is why a pointer to an object allocated on the heap is used. One more reason to use the heap is for storing large objects: the stack is small compared to the heap and should not be used to store very large objects.
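For example, the by-value version could look roughly like this (a sketch, assuming Vehicle is default-constructible and copyable):

Vehicle MyClass::GetVehicleByID(uint id)
{
    Vehicle car;                          // local object on the stack
    car.model = vehiclesArray[id].model;
    return car;                           // returned by value (copied, or moved in C++11)
}
// The caller gets its own Vehicle; no delete is needed, it is destroyed automatically.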
I hope this clarifies it a bit, but understanding all this well takes a lot of time and work; one Stack Overflow answer is not enough. I would suggest reading more about the differences between objects on the heap and objects on the stack in C++. There is an abundance of information online.
Just regarding the code in your question:
Maybe it's better to use a shared_ptr instead of a raw pointer.
#include <memory>

std::shared_ptr<Vehicle> MyClass::GetVehicleByID(uint id)
{
    std::shared_ptr<Vehicle> car = std::make_shared<Vehicle>();
    car->model = vehiclesArray[id].model;
    return car;
}

int main()
{
    std::shared_ptr<Vehicle> car = myClass.GetVehicleByID(0);
    std::cout << "Car Model: " << car->model << std::endl;
}
The shared_ptr class follows RAII, so you don't have to delete the object yourself. When main() ends, the destructor shared_ptr::~shared_ptr() is called and the managed object is deleted.
Anyway, it's a good idea to read C++ Primer.

How the globals are initialized before entry point?

I'm trying to figure out how Windows manages to map the memory of a PE file into the address space, so I've seen something that makes me confused.
Let's say we have something like this:
#include <windows.h>

HMODULE some_module = GetModuleHandleA(NULL);

int main() // Or DllMain, it doesn't matter
{
    // some operations using some_module or whatever
    return 0;
}
The initialization of some_module is performed before the entry point is called. I'm trying to implement this by looking into the PE file (I found the initialization functions), but the only thing I can see is that those initialization functions appear as RUNTIME_FUNCTION entries, nothing else. How can I pick out those initialization functions among all the runtime functions and call them manually? Is there any documentation about this? I also tried a function called RtlAddFunctionTable, but I think it's not made for that. What kind of operations can be performed to implement that? Thanks.
The problem is solved; it turned out to be about something different. But I did some research and saw that those entries (runtime functions, including the static initializers) are already called from the entry point. The initializer functions are laid out in a memory range and called by a function named "ucrtbase!initterm" (or "ucrtbase!_initterm"). In some PE files that initterm function is compiled directly into the binary instead of being imported from ucrtbase. Finally, those functions are called in the order they are located in memory (lower address -> upper address).
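For reference, what an _initterm-style walk does is roughly this (a sketch of the idea, not the real CRT source; the compiler builds the pointer table between the .CRT$XCA and .CRT$XCZ sections):

typedef void (__cdecl *_PVFV)(void);

/* Call every non-null initializer between first and last, in address order. */
static void my_initterm(_PVFV *first, _PVFV *last)
{
    for (_PVFV *fn = first; fn < last; ++fn) {
        if (*fn)
            (*fn)();
    }
}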

Java need advices

I'm designing a small library, and sometimes I write a couple of lines and it just doesn't feel right, so I'd like to get the opinions/advice of an experienced Java programmer.
I've got a listener which handles 3 different events, and in one of my classes I implement the methods that will actually fire the events.
So what I did at first was something like this:
protected final void fireOperationStarted() {
    OperationEvent event = new OperationEvent(this);
    for (OperationListener listener : listeners) {
        listener.operationStarted(event);
    }
}

protected final void fireOperationEnded() {
    OperationEvent event = new OperationEvent(this);
    for (OperationListener listener : listeners) {
        listener.operationEnded(event);
    }
}
// omitted the 3rd method on purpose
But this code felt wrong, because if someone wants to implement their own event, they will basically need access to the whole listener list (a CopyOnWriteArrayList) and write the same loop logic again and again.
So what I opted for is a Fireable interface with a single method "fire". This is what I've done:
protected final void fireOperationStarted() {
    fireOperation(new Fireable() {
        @Override
        public void fire(OperationListener listener, OperationEvent event) {
            listener.operationStarted(event);
        }
    });
}

protected final void fireOperationEnded() {
    fireOperation(new Fireable() {
        @Override
        public void fire(OperationListener listener, OperationEvent event) {
            listener.operationEnded(event);
        }
    });
}

protected void fireOperation(Fireable fireable) {
    OperationEvent event = new OperationEvent(this);
    for (OperationListener listener : listeners) {
        fireable.fire(listener, event);
    }
}
I'd like to get your opinions. I personally think it's better than the first implementation, even if there is still a lot of boilerplate code. Maybe there is a better way to do this? I've looked at the java.awt.event package source code to see how they deal with multiple events and how they fire them, but it seems way too complicated for my needs.
One thing I was also wondering about is lambda expressions in Java 8: if I use them without importing any Java 8 packages and I compile, will it work on JRE 7?
It would be great to use JDK 8 to make my code cleaner eventually.
Thanks for your help!
I think your first example is better. listeners has got to be an instance field, and so readily available to everybody.
(You might have only one method in OperationListener and use a value in OperationEvent to determine which action is involved. Then your methods could all pass the proper event to one method that calls the one listener method.)
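For concreteness, that single-method variant could look something like this; all of the names and the enum below are invented for the sketch:

import java.util.concurrent.CopyOnWriteArrayList;

enum OperationType { STARTED, ENDED, FAILED }

class OperationEvent {
    private final Object source;
    private final OperationType type;

    OperationEvent(Object source, OperationType type) {
        this.source = source;
        this.type = type;
    }

    Object getSource() { return source; }
    OperationType getType() { return type; }
}

interface OperationListener {
    void operationPerformed(OperationEvent event); // one method instead of three
}

class OperationSource {
    private final CopyOnWriteArrayList<OperationListener> listeners = new CopyOnWriteArrayList<>();

    public void addListener(OperationListener listener) { listeners.add(listener); }

    // One fire method covers all three cases; the event says which one it is.
    protected final void fireOperation(OperationType type) {
        OperationEvent event = new OperationEvent(this, type);
        for (OperationListener listener : listeners) {
            listener.operationPerformed(event);
        }
    }
}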
Your second idea is interesting, but for use inside one instance of one class, I think it's overkill.
There's all different kinds of ways to store listeners. If you're not adding and removing them too fast, ArrayList is good. If there's any chance of adding and removing them on different threads and you're calling the listeners frequently, CopyOnWriteArrayList is much better.
Don't worry too much about "boilerplate". Java tends to go with wordy-but-simple as regards low level code. The two for loops in your first example call out to be combined somehow, but it's not worth worrying about until you've got a lot more of them.
Lambdas will reduce your lines of code (if you use simple ones; my C# lambdas all end up running 20 lines or more; might as well be anonymous classes!), but they'll add plenty of pages to the language manual. However, lambdas aren't there till JRE 8.
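If you do eventually move to Java 8, note that your Fireable has a single abstract method, so the anonymous classes collapse into lambdas or method references, roughly like this (reusing your fireOperation and OperationListener as-is):

// Java 8 or later only:
protected final void fireOperationStarted() {
    fireOperation((listener, event) -> listener.operationStarted(event));
    // or, shorter still, a method reference:
    // fireOperation(OperationListener::operationStarted);
}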

C++/CLI Resource Management Confusion

I am extremely confused about resource management in C++/CLI. I thought I had a handle (no pun intended) on it, but I stumbled across the auto_gcroot<T> class while looking through header files, which led to a Google search, then the better part of a day reading documentation, and now confusion. So I figured I'd turn to the community.
My questions concern the difference between auto_handle/stack semantics, and auto_gcroot/gcroot.
auto_handle: My understanding is that this will clean up a managed object created in a managed function. My confusion is that isn't the garbage collector supposed to do that for us? Wasn't that the whole point of managed code? To be more specific:
// Everything that follows is managed code
void WillThisLeak(void)
{
    String ^str = gcnew String(L"");
    // Did I just leak memory? Or will GC clean this up? What if an exception is thrown?
}

void NotGoingToLeak(void)
{
    String ^str = gcnew String(L"");
    delete str;
    // Guaranteed not to leak, but is this necessary?
}

void AlsoNotGoingToLeak(void)
{
    auto_handle<String> str = gcnew String(L"");
    // Also guaranteed not to leak, but is this necessary?
}

void DidntEvenKnowICouldDoThisUntilToday(void)
{
    String str(L"");
    // Also guaranteed not to leak, but is this necessary?
}
Now this would make sense to me if it were a replacement for the C# using keyword, and it were only recommended for use with resource-intensive types like Bitmap, but this isn't mentioned anywhere in the docs, so I'm afraid I've been leaking memory this whole time.
auto_gcroot
Can I pass it as an argument to a native function? What will happen on copy?
void function(void)
{
    auto_gcroot<Bitmap ^> bmp = //load bitmap from somewhere
    manipulateBmp(bmp);
    pictureBox.Image = bmp; // Is my Bitmap now disposed of by auto_gcroot?
}

#pragma unmanaged
void manipulateBmp(auto_gcroot<Bitmap ^> bmp)
{
    // Do stuff to bmp
    // destructor for bmp is now called right? does this call dispose?
}
Would this have worked if I'd used a gcroot instead?
Furthermore, what is the advantage to having auto_handle and auto_gcroot? It seems like they do similar things.
I must be misunderstanding something for this to make so little sense, so a good explanation would be great. Also any guidance regarding the proper use of these types, places where I can go to learn this stuff, and any more good practices/places I can find them would be greatly appreciated.
thanks a lot,
Max
Remember that delete called on a managed object is akin to calling Dispose in C#. So you are right that auto_handle lets you do what you would do with the using statement in C#: it ensures that delete gets called at the end of the scope. So, no, you're not leaking managed memory if you don't use auto_handle (the garbage collector takes care of that); you are just failing to call Dispose. There is no need to use auto_handle if the types you're dealing with do not implement IDisposable.
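In other words, something like this (a sketch, assuming a type that actually implements IDisposable):

#include <msclr/auto_handle.h>
using namespace msclr;

void UseStream()
{
    // auto_handle plays the role of C#'s "using" block.
    auto_handle<System::IO::MemoryStream> ms(gcnew System::IO::MemoryStream());
    ms->WriteByte(42);
}   // ~auto_handle calls delete here, i.e. MemoryStream::Dispose, even if an exception is thrown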
gcroot is used when you want to hold on to a managed type inside a native class. You can't just declare a managed type directly in a native type using the hat ^ symbol; you must use a gcroot. This is a "garbage collected root". So, while the gcroot (a native object) lives, the garbage collector cannot collect the referenced object. When the gcroot is destroyed, it lets go of the reference, and the garbage collector is free to collect the object (assuming it has no other references). You wouldn't normally declare a free-standing gcroot in a method like you've done above; just use the hat ^ syntax whenever you can.
So when would you use auto_gcroot? Use it when you need to hold on to a managed type inside a native class AND that managed type happens to implement IDisposable. On destruction of the auto_gcroot, it will do two things: call delete on the managed type (think of this as a Dispose call; no memory is freed) and free the reference (so the object can be garbage collected).
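A sketch of both inside a native class (the class and its members are invented for illustration):

#include <vcclr.h>              // gcroot
#include <msclr/auto_gcroot.h>  // msclr::auto_gcroot
#using <System.Drawing.dll>

class NativeHolder
{
    gcroot<System::String^> name;                     // kept alive; nothing to Dispose
    msclr::auto_gcroot<System::Drawing::Bitmap^> bmp; // Dispose'd (delete) in the destructor
public:
    NativeHolder()
        : name(gcnew System::String(L"example"))
        , bmp(gcnew System::Drawing::Bitmap(16, 16))
    {
    }
    // ~NativeHolder: auto_gcroot calls delete on the Bitmap (its Dispose) and releases
    // the GC reference; the plain gcroot only releases the reference.
};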
Hope it helps!
Some references:
http://msdn.microsoft.com/en-us/library/aa730837(v=vs.80).aspx
http://msdn.microsoft.com/en-us/library/481fa11f(v=vs.80).aspx
http://www.codeproject.com/Articles/14520/C-CLI-Library-classes-for-interop-scenarios

Pointers in and out of DLLs

Is it possible to pass a pointer to an object into a DLL, initialize it, and then use the initialized pointer in the main application? If so how? Are there any good articles on the subject, perhaps a tutorial?
I have read this article http://msdn.microsoft.com/en-us/library/ms235460.aspx but it did not seem to get me anywhere. Maybe I am misinterpreting it...
Yes, this is fine, but assuming your DLL has dynamically allocated the data being pointed to, you must be careful about how you free it. There are a few ways to deal with this:
1. The DLL documents a method by which one should free the data (e.g., CoTaskMemFree).
2. The DLL exposes a function that should be called to later free the data (sketched below).
3. The DLL and the caller use a common DLL-based runtime; this allows the caller to use the C++ delete operator.
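For example, option 2 could look roughly like this (Widget and the function names are invented for the sketch):

// In the DLL: the same module that allocates the object also frees it.
struct Widget { int id; };

extern "C" __declspec(dllexport) Widget* CreateWidget()
{
    return new Widget();      // allocated on the DLL's CRT heap
}

extern "C" __declspec(dllexport) void DestroyWidget(Widget* w)
{
    delete w;                 // freed by the same CRT that allocated it
}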
Yes.
Assuming you are using Microsoft Visual Studio as your development environment, you can export a class rather directly from a DLL. Add a define to your DLL project, something like BUILDING_THE_DLL, and the following code snippet will export a C++ class wholesale from the DLL.
#ifdef BUILDING_THE_DLL
#define DLLEXPORT __declspec(dllexport)
#else
#define DLLEXPORT __declspec(dllimport)
#endif

class DLLEXPORT DllClass
{
    ....
};
This is a highly coupled solution and only works if you build the application and its DLL using the same development environment, and rebuild both whenever the class definition changes in any way. This method is heavily used by the MFC library.
To achieve greater independence between the DLL and the app, one typically defines a relatively immutable interface and uses that to keep the builds more independent, possibly even using different build environments.
An implementation in your dll would look something like this:
struct IMyInterface {
    virtual void Destroy() = 0;
    virtual void Method() = 0;
};

class MyDllObject : public IMyInterface
{
    // implementation
};

bool DLLEXPORT DllGetInterface(IMyInterface** ppOut)
{
    *ppOut = new MyDllObject();
    return true;
}
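On the application side, usage would look roughly like this (assuming you link against the DLL's import library and share the IMyInterface header):

int main()
{
    IMyInterface* obj = nullptr;
    if (DllGetInterface(&obj) && obj != nullptr)
    {
        obj->Method();
        obj->Destroy();   // let the DLL delete its own object; don't call delete here
    }
    return 0;
}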
