NSLock - should it just block when locking a locked lock? - cocoa

I have a loop which starts with a
[lock lock];
because in the body of the loop I am creating another thread which needs to finish before the loop runs again. (The other thread will unlock it when finished).
However, on the second iteration of the loop I get the following error:
2011-02-02 07:15:05.032 BLA[21915:a0f] *** -[NSLock lock]: deadlock (<NSLock: 0x100401f30> '(null)')
2011-02-02 07:15:05.032 BLA[21915:a0f] *** Break on _NSLockError() to debug.
The "lock" documentation states the following:
Abstract: Attempts to acquire a lock, blocking a thread’s execution until the lock can be acquired. (required)
which makes me think it would just block until the lock could be acquired?

Sounds like two problems:
Locking a lock on one thread and unlocking it on another is not supported – you probably want NSCondition: wait on the NSCondition in the parent thread and signal it in the child thread (a minimal sketch follows this list).
A normal NSLock can't be locked again by the thread that already holds it. That's what NSRecursiveLock is for.
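For the first point, here is a minimal sketch of the NSCondition pattern, assuming a shared flag and a hypothetical helper that spawns the worker thread (the names are illustrative, not your actual code):

// Shared between the two threads (e.g. instance variables):
NSCondition *condition = [[NSCondition alloc] init];
BOOL workerFinished = NO;

// Parent thread, once per loop iteration:
[condition lock];
workerFinished = NO;
[self startWorkerThread];          // hypothetical helper that spawns the worker
while (!workerFinished)            // the loop guards against spurious wakeups
    [condition wait];              // atomically releases the lock while waiting
[condition unlock];

// Worker thread, when its work is done:
[condition lock];
workerFinished = YES;
[condition signal];                // wakes the waiting parent
[condition unlock];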

Did you remember to send -unlock when you were done? Each call to -lock must be paired with a call to -unlock.

Related

LTTng/Perf: Difference between events used for exiting (sched_process_exit) and freeing (sched_process_free) a process

Currently, I'm getting into the topic of kernel tracing with LTTng and Perf. I'm especially interested in tracing the different states a process goes through.
I stumbled over the events sched_process_free and sched_process_exit. I'm wondering if my current understanding is correct:
When a process exits, sched_process_exit is written to the trace. However, the process descriptor might still be in memory, which leads to a zombie. Only when all memory connected to the process is freed is sched_process_free emitted. This would mean that if I really want to be sure the process is fully terminated and removed from memory, I have to listen for sched_process_free instead of sched_process_exit in the trace. Is this correct?
I found some time to edit my answer to make it clearer. If anything is still unclear, please tell me and we can discuss it further. Let's dive into the end of a task:
There are two system calls, exit_group() and exit(), and both of them end up in do_exit(), which does the following:
set PF_EXITING, which marks the task as exiting
remove the task from any pending timers with del_timer_sync()
call exit_mm(), exit_sem(), __exit_fs() and others to release the structures of that task
call perf_event_exit_task(tsk)
decrease the reference count
set exit_code to the value passed to _exit()/exit_group(), or to an error code
call exit_notify(), which:
    updates the relationship with the parent and children
    checks exit_signal and sends SIGCHLD
    if the task is not traced and its exit_signal is -1, sets exit_state to EXIT_DEAD and calls release_task() to recycle the remaining memory and decrease the reference count
    otherwise (for example if the task is traced), sets exit_state to EXIT_ZOMBIE
set the task flag PF_DEAD
call schedule()
We need the zombie state because the parent may still need information from the child's task descriptor (for example its exit status), so we cannot delete everything right away. The parent has to call something like wait() to check whether the child is dead. After wait(), the zombie is finally released by release_task(), which will:
decrease the owner's task count
if the task is traced, remove it from the ptrace_children list
call __exit_signal() to delete all pending signals and release the signal_struct descriptor, and exit_itimers() to delete all the timers
call __exit_sighand() to delete the signal handlers
call __unhash_process(), which:
    decrements nr_threads
    calls detach_pid() to remove the task descriptor from the PIDTYPE_PID and PIDTYPE_TGID hash tables
    calls REMOVE_LINKS to remove the task from the task list
call sched_exit() to adjust the parent's time slices
call put_task_struct() to decrease the reference counter and release the memory and task descriptor
call delayed_put_task_struct()
So we know that the sched_process_exit event is emitted in do_exit(), but at that point we cannot be sure whether the process has actually been released (release_task(), which triggers sched_process_free, may or may not have been called yet). That is why we need both perf event points.
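You can observe that gap from user space with a small experiment (a hedged sketch using plain POSIX calls; run it while tracing the two events): the child exits immediately, but it stays a zombie until the parent reaps it with waitpid(), which is when release_task() can run and sched_process_free follows.

#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        _exit(0);          /* child exits here: sched_process_exit is emitted */

    sleep(10);             /* during this window the child is a zombie:
                              no sched_process_free for it yet */

    waitpid(pid, NULL, 0); /* parent reaps the child; release_task() runs and
                              sched_process_free follows shortly after */
    return 0;
}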

How does a mutex.Lock() know which variables to lock?

I'm a go-newbie, so please be gentle.
So I've been using mutexes in some of my code for a couple of weeks now. I understand the concept behind them: lock access to a certain resource, interact with it (read or write), and then unlock it for others again.
The mutex code I use is mostly copy-paste-adjust. The code runs, but I'm still trying to wrap my head around its internal workings. Until now I've always used a mutex within a struct to lock that struct. Today I found this example, though, which made it completely unclear to me what the mutex actually locks. Below is a piece of the example code:
var state = make(map[int]int)
var mutex = &sync.Mutex{}
var readOps uint64
var writeOps uint64
// Here we start 100 goroutines to execute repeated reads against the state, once per millisecond in each goroutine.
for r := 0; r < 100; r++ {
    go func() {
        total := 0
        for {
            key := rand.Intn(5)
            mutex.Lock()
            total += state[key]
            mutex.Unlock()
            atomic.AddUint64(&readOps, 1)
            time.Sleep(time.Millisecond)
        }
    }()
}
What puzzles me here is that there doesn't seem to be any connection between the mutex and the value it is supposed to lock. Until today I thought that a mutex locks a specific variable, but looking at this code it seems to somehow lock the whole program into executing only the lines below the Lock() until the Unlock() is run again. I suppose that means that all the other goroutines are paused for a moment until the Unlock() is run. Since the code is compiled, I suppose it could know which variables are accessed between the Lock() and the Unlock(), but I'm not sure whether that is the case.
If all the other goroutines pause for a moment, it doesn't sound like real multiprocessing, so I'm guessing I don't have a good understanding of what's going on.
Could anybody help me out in understanding how the computer knows which variables it should lock?
lock access to a certain resource, interact with it (read or write), and then unlock it for others again.
Basically yes.
What puzzles me here is that there doesn't seem to be any connection between the mutex and the value it is supposed to lock.
A mutex is just a mutual exclusion object that synchronizes access to a resource. That means that if two different goroutines want to lock the mutex, only the first one gets it; the second goroutine then waits until it can lock the mutex itself. There is no connection to variables whatsoever; you can use a mutex to guard whatever you want, for example a single HTTP request, a single database read/write operation, or a single variable assignment. While I don't advise using a mutex for all of those examples, the general idea should become clear.
but looking at this code it seems to somehow lock the whole program into doing only the lines below the lock, until the unlock is ran again.
Not the whole program; only the goroutines that want to lock the same mutex wait until they can.
I suppose that means that all the other goroutines are paused for a moment until the unlock is ran again.
No, they don't pause. They keep executing until they try to lock the same mutex.
If you want to group your mutex specifically with a variable, why not create a struct?
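For illustration, here is a minimal sketch (the Counter type and its fields are made up for this example) of grouping a mutex with the data it protects; the link is still only a convention, every method simply agrees to take the lock before touching the map:

package main

import (
    "fmt"
    "sync"
)

// Counter groups the mutex with the data it guards. The association is a
// convention: every method locks mu before touching counts.
type Counter struct {
    mu     sync.Mutex
    counts map[int]int
}

func (c *Counter) Inc(key int) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.counts[key]++
}

func (c *Counter) Get(key int) int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.counts[key]
}

func main() {
    c := &Counter{counts: make(map[int]int)}

    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            c.Inc(1) // only one goroutine at a time gets past c.mu.Lock()
        }()
    }
    wg.Wait()
    fmt.Println(c.Get(1)) // always 100
}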

Completion object race condition

What happens if complete_all() is called on a completion object (from task B) before task A gets to call wait_for_completion() on that object? Is there some API to find out whether the object has already been completed at the time of the wait and return right away? One way could be a mutex that is locked before sending the message and unlocked before the wait; that lock would need to be acquired before complete_all() and released after, but I'm wondering if there is a cleaner/better way. Any ideas are welcome.
More context: task A initializes the completion object, sends a request to task B along with the address of the completion object, and then waits for the completion. Task B does some processing when it gets the message and then calls complete_all() on the completion object.
If complete() or complete_all() is called before wait_for_completion() for a particular completion object, then wait_for_completion() will return immediately. A completion object is roughly like a semaphore:
Internally, a completion object has a done counter that is initialized to 0.
wait_for_completion() sleeps until done > 0 (or proceeds immediately if done is already greater than 0), and atomically decrements done before returning.
complete() increments done and wakes up the first process sleeping in wait_for_completion().
complete_all() sets done to UINT_MAX / 2 (effectively infinity) and wakes up everyone sleeping in wait_for_completion().
So if I'm understanding your question correctly, there is no need for additional locking; the completion object's internal wait.lock spinlock already synchronizes access to the counter, so the case you're worrying about is handled correctly.
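For reference, here is a minimal sketch of the pattern (kernel-side, with the message passing and thread setup elided; the completion calls themselves are the standard <linux/completion.h> API):

#include <linux/completion.h>

static DECLARE_COMPLETION(request_done);    /* done counter starts at 0 */

/* Task B: called when the request arrives. */
static void handle_request(void)
{
    /* ... do the processing ... */
    complete_all(&request_done);            /* done becomes "infinite", all waiters wake */
}

/* Task A: sends the request, then waits. */
static void send_and_wait(void)
{
    /* ... send the request (and &request_done) to task B ... */

    /* Safe even if task B already called complete_all(): a non-zero done
     * counter makes wait_for_completion() return immediately. */
    wait_for_completion(&request_done);

    /* Non-blocking checks also exist:
     *   completion_done(&request_done)         - test without consuming
     *   try_wait_for_completion(&request_done) - consume one count if available */
}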

Make parent thread wait till child thread finishes in VC

According to MSDN:
The WaitForSingleObject function can wait for the following objects:
Change notification
Console input
Event
Memory resource notification
Mutex
Process
Semaphore
Thread
Waitable timer
Then we can use WaitForSingleObject to make the parent thread wait for its child threads:
#include <windows.h>

DWORD WINAPI child(LPVOID param) { return 0; }   // the child thread's entry point (stub)

int main()
{
    HANDLE h_child_thread = CreateThread(0, 0, child, 0, 0, 0); // create a thread in VC
    WaitForSingleObject(h_child_thread, INFINITE);              // so the parent thread will wait
    return 0;
}
Question
Is there any other way to make parent-thread wait for child ones in VC or Windows?
I don't quite understand the usage of WaitForSingleObject here. Does it mean that the thread's handle will still be available when the thread terminates?
You can establish communication between threads in multiple ways, and the terminating thread may somehow signal the waiting thread. It could be as simple as writing some special value to a shared memory location that the waiting thread can check. But this won't guarantee that the terminating thread has actually terminated when the waiting thread sees the special value (ordering/race conditions), or that the terminating thread terminates shortly after that (it can still hang or block on something), and it won't guarantee that the special value ever gets set before the terminating thread actually terminates (the thread can crash). WaitForSingleObject (and its companion WaitForMultipleObjects) is a sure way to know of a thread's termination when it occurs. Just use it.
The handle will still be available in the sense that its value won't be gone. But it is practically useless after the thread has terminated, except that you need this handle to get the thread's exit code. And you still need to close the handle in the end, unless you're OK with handle/memory leaks.
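For illustration, the usual pattern right after the wait (a hedged sketch extending the question's code, reusing h_child_thread from above) is to read the exit code and then close the handle:

// After WaitForSingleObject() returns, the thread has terminated,
// but the handle is still valid until we close it.
DWORD exit_code = 0;
if (GetExitCodeThread(h_child_thread, &exit_code))
{
    // use exit_code ...
}
CloseHandle(h_child_thread);   // otherwise the handle is leaked
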
For the first question: yes. The method commonly used here is "Join"; the usage is language dependent.
In .NET C++ you can use the Thread's Join method. This is from MSDN:
Thread* newThread = new Thread(new ThreadStart(0, Test::Work));
newThread->Start();

if (newThread->Join(waitTime + waitTime))
{
    Console::WriteLine(S"New thread terminated.");
}
else
{
    Console::WriteLine(S"Join timed out.");
}
Secondly, the thread has terminated when WaitForSingleObject signals you, but the handle is still valid (for a terminated thread), so you still need to explicitly close the handle with CloseHandle.

Synchronization primitive with IOKit

I'm looking for a wait/signal synchronization primitive in IOKit that works like:
Thread1 : wait(myEvent) // Blocking thread1
Thread2 : wait(myEvent) // Blocking thread2
Thread3 : signal(myEvent) // Release one of thread1 or thread2
This can't be done using an IOLock since the lock/unlock operations would be made from different threads, which is a bad idea according to some doc I've read.
Thread1, 2, 3 can be user threads or kernel threads.
I'd also like to have an optional timeout on the wait operation.
Thanks for your help!
You want the function IOLockSleepDeadline(), declared in <IOKit/IOLocks.h>.
You set up a single IOLock somewhere with IOLockAlloc() before you begin. Then threads 1 and 2 lock the IOLock with IOLockLock() and immediately relinquish the lock and go to sleep by calling IOLockSleepDeadline(). When thread 3 is ready, it calls IOLockWakeup() (with oneThread = true if you only want to wake a single thread). This causes thread 1 or 2 to wake up and immediately re-acquire the lock (so it needs to unlock it or sleep again).
IOLockSleep() works similarly, but without the timeout.
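A minimal sketch of that pattern (the IOLock calls are the real <IOKit/IOLocks.h> API; the flag, the event token and the error handling are simplified illustrations):

#include <IOKit/IOLocks.h>
#include <kern/clock.h>

static IOLock *gLock;
static bool    gEventFired;     // the condition the waiters sleep on

// Call once during setup.
static void mySetup(void)
{
    gLock = IOLockAlloc();
    gEventFired = false;
}

// Threads 1 and 2: wait, with a timeout, until thread 3 signals.
static bool myWait(uint32_t timeoutMs)
{
    AbsoluteTime deadline;
    clock_interval_to_deadline(timeoutMs, kMillisecondScale, &deadline);

    IOLockLock(gLock);
    while (!gEventFired) {
        // Atomically drops the lock, sleeps on &gEventFired, re-acquires it on wakeup.
        if (IOLockSleepDeadline(gLock, &gEventFired, deadline, THREAD_UNINT) == THREAD_TIMED_OUT)
            break;
    }
    bool fired = gEventFired;
    IOLockUnlock(gLock);
    return fired;
}

// Thread 3: wake one waiting thread.
static void mySignal(void)
{
    IOLockLock(gLock);
    gEventFired = true;
    IOLockWakeup(gLock, &gEventFired, true /* oneThread */);
    IOLockUnlock(gLock);
}
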
You can do something similar using IOCommandGate's commandSleep() method, which may be more appropriate if your driver is already centred around an IOWorkLoop.
The documentation of the IOLockLock function (in IOLocks.h) states the following:
Lock the mutex. If the lock is held by any thread, block waiting for
its unlock. This function may block and so should not be called from
interrupt level or while a spin lock is held. Locking the mutex
recursively from one thread will result in deadlock.
So it will certainly block the other threads (T1 and T2) until the thread holding the lock (T3) releases it. One thing it doesn't seem to support, though, is a timeout.

Resources