First of all, I tested this code and it worked well. I want to know whether it is actually correct.
void funcA() {
    DWORD res = WaitForSingleObject(mutex, INFINITE);
    if (aaa != bbb) throw "aaa";
    ReleaseMutex(mutex);
}
WaitForSingleObject always returns 0, never WAIT_ABANDONED or any other error code. I just can't find any documentation that says whether the mutex is released on a throw.
Thank you
No, the mutex won't be released on throw.
You can, however, build your own Lock class that acquires the mutex in its constructor and releases it in its destructor. Then, if you use a Lock object in your function (on the stack, not the heap), you can ensure that the destructor for that object is called during stack unwinding and the mutex is released.
This is exactly what the CSingleLock class does in MFC.
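For illustration, a minimal sketch of such a lock class around a Win32 mutex handle (not the MFC class itself; mutex, aaa and bbb stand in for the globals from your snippet):

#include <windows.h>

HANDLE mutex;   // as in the question
int aaa, bbb;

class MutexLock {
public:
    explicit MutexLock(HANDLE m) : mutex_(m) {
        WaitForSingleObject(mutex_, INFINITE);
    }
    ~MutexLock() {
        // Runs during stack unwinding too, so the mutex is
        // released even if an exception is thrown.
        ReleaseMutex(mutex_);
    }
private:
    HANDLE mutex_;
    MutexLock(const MutexLock&);            // non-copyable
    MutexLock& operator=(const MutexLock&);
};

void funcA() {
    MutexLock lock(mutex);        // acquired here
    if (aaa != bbb) throw "aaa";  // still released during unwinding
}                                 // released here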
It won't, unless the release is called implicitly in a destructor or a catch block. Did you show all the code needed to describe the problem?
Related
FreeRTOS and ESP-IDF provide xSemaphoreCreateMutex() which allocates and initializes a mutex. Their docs say:
If a mutex is created using xSemaphoreCreateMutex() then the required
memory is automatically dynamically allocated inside the
xSemaphoreCreateMutex() function. (see
http://www.freertos.org/a00111.html).
However, I can't find any info on whether it is necessary to free the memory created by the mutex. This would be important if using C++ with a mutex member variable, like:
class MyClass
{
public:
    MyClass()
    {
        mAccessMutex = xSemaphoreCreateMutex();
    }

    ~MyClass()
    {
        // What to do here??
    }

private:
    SemaphoreHandle_t mAccessMutex;
};
REFERENCE
https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/system/freertos.html?highlight=xsemaphore#semaphore-api
According to the FreeRTOS API reference, the proper way to destroy/delete a mutex is vSemaphoreDelete():
Deletes a semaphore, including mutex type semaphores and recursive
semaphores.
Do not delete a semaphore that has tasks blocked on it.
If you're using heap_1, deleting is not possible. Also, make sure that you fully understand the perils of dynamic memory allocation in embedded systems before using it. If MyClass is going to be created and destroyed on a regular/periodic basis, this may cause problems.
So yes, it's necessary to call vSemaphoreDelete(mAccessMutex) in ~MyClass(). But it's probably best to make sure that MyClass instances never get destroyed. In my projects, I generally use one-time dynamic allocation during initialization and skip implementing a proper destructor (which is a bad habit that I need to fix).
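For illustration, a minimal sketch of what that could look like (assuming the ESP-IDF include paths, and that the handle may be NULL if creation failed):

#include "freertos/FreeRTOS.h"
#include "freertos/semphr.h"

class MyClass
{
public:
    MyClass()
    {
        mAccessMutex = xSemaphoreCreateMutex();
    }

    ~MyClass()
    {
        // Only delete if creation succeeded, and never delete a
        // semaphore that still has tasks blocked on it.
        if (mAccessMutex != NULL)
        {
            vSemaphoreDelete(mAccessMutex);
        }
    }

private:
    SemaphoreHandle_t mAccessMutex;
};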
I'm using the ROS (Robot Operating System) framework. If you are familiar with ROS: in my code I'm not using action servers, just plain publishers, subscribers and services. Unfortunately, I'm facing an issue with a pthread recursive mutex error. The following is the error and its backtrace.
If anyone is familiar with the ROS stack, could you please share what the potential causes of this runtime error might be?
I can give more information about the runtime error if needed. Help much appreciated. Thanks
/usr/include/boost/thread/pthread/recursive_mutex.hpp:113: void boost::recursive_mutex::lock(): Assertion `!pthread_mutex_lock(&m)' failed.
The lock method implementation merely asserts on the pthread return value:
void lock()
{
    BOOST_VERIFY(!posix::pthread_mutex_lock(&m));
}
This means that according to the docs, either:
(EAGAIN) The mutex could not be acquired because the maximum number of
recursive locks for mutex has been exceeded.
This would indicate that you have some kind of imbalance in your locks (not at this call site, because unique_lock<> makes sure that doesn't happen), or that you are just racking up threads that are all waiting for the same lock.
(EOWNERDEAD) The mutex is a robust mutex and the process containing the
previous owning thread terminated while holding the mutex lock. The mutex
lock shall be acquired by the calling thread and it is up to the new
owner to make the state consistent.
Boost does not deal with this case and simply asserts. This is also unlikely to occur if all your threads use RAII lock guards (scoped_lock, unique_lock, shared_lock, lock_guard). It could, however, occur if you call the lock() (and unlock()) functions manually somewhere and the thread exits without unlock()ing.
There are some other ways in which (particularly checked) mutexes can fail, but those would not apply to boost::recursive_mutex.
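As an illustration of the guard-based approach (a hypothetical snippet, not taken from your code):

#include <boost/thread/recursive_mutex.hpp>
#include <boost/thread/locks.hpp>

boost::recursive_mutex mtx;
int shared_counter = 0;

void safe_increment()
{
    // The guard releases the mutex on every exit path,
    // including early returns and exceptions.
    boost::lock_guard<boost::recursive_mutex> guard(mtx);
    ++shared_counter;
}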
This looks like a use-after-free problem, where a mutex has already been destroyed, probably because its owning object was deleted.
I had some success using Valgrind to hunt down this type of bug. Install it using apt install valgrind, and add launch-prefix="valgrind" to the <node> in your launch file. It will be super slow, but it's quite adept at pinpointing these issues.
Take this buggy program for example:
struct Test
{
    int a;
};

int main()
{
    Test* test = new Test();
    test->a = 42;
    delete test;
    test->a = 0; // BUG!
}
valgrind ./testprog yields
==8348== Invalid write of size 4
==8348== at 0x108601: main (test.cpp:11)
==8348== Address 0x5b7ec80 is 0 bytes inside a block of size 4 free'd
==8348== at 0x4C3168B: operator delete(void*, unsigned long) (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==8348== by 0x108600: main (test.cpp:10)
==8348== Block was alloc'd at
==8348== at 0x4C303EF: operator new(unsigned long) (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==8348== by 0x1085EA: main (test.cpp:8)
Note how it will not only tell you where the buggy access happened (test.cpp:11), but also where the Test object was deleted (test.cpp:10), and where it was initially created (test.cpp:8).
Good luck in your bug hunt!
var p = &sync.Pool{
    New: func() interface{} {
        return &serveconn{}
    },
}

func newServeConn() *serveconn {
    sc := p.Get().(*serveconn)
    runtime.SetFinalizer(sc, (*serveconn).finalize)
    fmt.Println(sc, "SetFinalizer")
    return sc
}

func (sc *serveconn) finalize() {
    fmt.Println(sc, "finalize")
    *sc = serveconn{}
    runtime.SetFinalizer(sc, nil)
    p.Put(sc)
}
The above code tries to reuse objects from a pool via SetFinalizer, but after debugging I found the finalizer is never called. Why?
UPDATE
This may be related: https://github.com/golang/go/issues/2368
The above code tries to reuse objects from a pool via SetFinalizer, but after debugging I found the finalizer is never called. Why?
The finalizer is only called on an object when the GC marks it as unused and then tries to sweep (free) it at the end of the GC cycle.
As a corollary, if a GC cycle is never performed during the runtime of your program, the finalizers you set may never be called.
Just in case you might hold a wrong assumption about Go's GC, it may be worth noting that Go does not employ reference counting on values; instead, it uses a GC which works in parallel with the program, in sessions that happen periodically and are triggered by certain parameters, such as the pressure on the heap produced by allocations.
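A tiny illustration of that point (with a made-up resource type): without any GC cycle the finalizer never gets a chance to run; forcing one with runtime.GC() usually lets it fire:

package main

import (
    "fmt"
    "runtime"
    "time"
)

type resource struct{ id int }

func main() {
    r := &resource{id: 1}
    runtime.SetFinalizer(r, func(r *resource) {
        fmt.Println("finalizer ran for resource", r.id)
    })

    r = nil      // drop the last reference
    runtime.GC() // without a GC cycle the finalizer may never run

    // Finalizers run on a separate goroutine after the GC cycle,
    // so give it a moment before the program exits.
    time.Sleep(100 * time.Millisecond)
    fmt.Println("done")
}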
A couple of assorted notes regarding finalizers:
When the program terminates, no GC is forcibly run. A corollary of this is that a finalizer is not guaranteed to run at all.
If the GC finds a finalizer on an object about to be freed, it calls the finalizer but does not free the object. The object itself will be freed only on the next GC cycle, wasting the memory in the meantime.
All in all, you appear to be trying to implement destructors.
Please don't: make your objects implement the sort-of standard method called Close, and state in the contract of your type that the programmer is required to call it when they're done with the object.
When a programmer wants to call such a method no matter what, they use defer.
Note that this approach works perfectly for all types in the Go stdlib which wrap resources provided by the OS, such as file and socket descriptors. So there is no need to pretend your types are somehow different.
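A minimal sketch of that pattern, reusing the pool idea from your snippet (the id field and the handle function are made up):

package main

import (
    "fmt"
    "sync"
)

type serveconn struct{ id int }

var p = &sync.Pool{
    New: func() interface{} { return &serveconn{} },
}

// Close returns the connection to the pool; callers are required to
// call it when they are done, typically via defer.
func (sc *serveconn) Close() error {
    *sc = serveconn{} // reset state before reuse
    p.Put(sc)
    return nil
}

func handle() {
    sc := p.Get().(*serveconn)
    defer sc.Close() // deterministic release, no finalizer involved
    sc.id = 42
    fmt.Println("using conn", sc.id)
}

func main() {
    handle()
}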
Another useful thing to keep in mind is that Go was explicitly engineered to be a no-nonsense, no-frills, no-magic, in-your-face language, and you're trying to add magic to it.
Please don't: those who like deciphering layers of magic can program in Scala or other such languages.
The venerated book Linux Device Drivers says that
The flags argument passed to spin_unlock_irqrestore must be the same variable passed to spin_lock_irqsave. You must also call spin_lock_irqsave and spin_unlock_irqrestore in the same function; otherwise your code may break on some architectures.
Yet I can't find any such restriction required by the official documentation bundled with the kernel code itself. And I find driver code that violates this guidance.
Obviously it isn't a good idea to call spin_lock_irqsave and spin_unlock_irqrestore from separate functions, because you're supposed to minimize the work done while holding a lock (with interrupts disabled, no less!). But have changes to the kernel made it possible if done with care, was it never actually against the API contract, or is it still verboten to do so?
If the restriction has been removed at some point, did it apply to version 3.10.17?
This is just a guess, but the book might be obliquely referring to a potential bug which could happen if you try to use a non-local variable or storage location for flags.
Basically, flags has to be private to the current execution context, which is why spin_lock_irqsave is a macro that takes the name of the flags variable. While flags is being saved, you don't have the spinlock yet.
How this is related to locking and unlocking in a different function:
Consider two functions that some driver developer might write:
void my_lock(my_object *ctx)
{
    spin_lock_irqsave(&ctx->mylock, ctx->myflags); /* BUG */
}

void my_unlock(my_object *ctx)
{
    spin_unlock_irqrestore(&ctx->mylock, ctx->myflags);
}
This is a bug because at the time ctx->myflags is written, the lock is not yet held, and it is a shared variable visible to other contexts and processors. The local flags must be saved to a private location on the stack. Then, once the lock is owned by the caller, a copy of the flags can be saved into the exclusively owned object. In other words, it can be fixed like this:
void my_lock(my_object *ctx)
{
    unsigned long flags;
    spin_lock_irqsave(&ctx->mylock, flags);
    ctx->myflags = flags;
}

void my_unlock(my_object *ctx)
{
    unsigned long flags = ctx->myflags; /* probably unnecessary */
    spin_unlock_irqrestore(&ctx->mylock, flags);
}
If it couldn't be fixed like that, it would be very difficult to implement higher level primitives which need to wrap IRQ spinlocks.
How it could be arch-dependent:
Suppose that spin_lock_irqsave expands into machine code which saves the current flags in some register, then acquires the lock, and then saves that register into specified flags destination. In that case, the buggy code is actually safe. If the expanded code saves the flags into the actual flags object designated by the caller and then tries to acquire the lock, then it's broken.
I have never seen that constraint anywhere aside from the book. Probably the information given in the book is just outdated, or simply wrong.
In the current kernel (and at least since 2.6.32, which I started working with), the actual locking is done through many levels of nested calls from spin_lock_irqsave (see, e.g., __raw_spin_lock_irqsave, which is called in the middle). So using different function contexts for lock and unlock is hardly a reason for malfunction by itself.
I guess this is a really nasty issue: it seems like one of the member destructors of my class creates a deadlock. Member destructors are called automatically after the class destructor's body, and I'd like to call them manually and make a log entry after each one succeeds.
The problem only occurs on devices where a debugger can't be used, so I am using a log instead.
Client::~Client() {
    // Stops io service and disconnects sockets
    exit();
    LOG("io_service stopped" << endl);

    // Destroy IO service
    io_.~io_service();
    LOG("io_service destroyed" << endl);
}
But the code above actually causes an exception, because ~io_service() gets called twice.
So is there a way to do this properly? If not, what's an alternative to debugging destructors?
You can't alter the compiler's behaviour like that. The compiler will augment the destructor so that it destroys the member objects.
What you can do is declare io_ as a pointer and allocate it dynamically with new. Then call delete io_ and monitor what happens there.
The other solution is just to put a breakpoint in the io_service destructor and follow what happens there upon destruction. This is probably the best idea.
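A rough sketch of the pointer approach (assuming boost::asio and that LOG just writes to stdout; the exit() call from your code is only hinted at):

#include <iostream>
#include <boost/asio.hpp>

// Stand-in for the logging macro used in the question.
#define LOG(x) (std::cout << x)

class Client {
public:
    Client() : io_(new boost::asio::io_service()) {}

    ~Client() {
        // exit() from the question would go here: stop the io
        // service and disconnect the sockets first.
        LOG("io_service stopped" << std::endl);

        delete io_;   // ~io_service() runs exactly once, right here
        io_ = nullptr;
        LOG("io_service destroyed" << std::endl);
    }

private:
    boost::asio::io_service* io_;
};

int main() {
    Client c;
}   // the destructor runs here; the log shows how far it got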