Highest thread priority and infinite loop without sleep in Windows

I've been reading about thread priorities on MSDN and I created a test program that has two threads. One thread prints out some text and then sleeps, while the other runs an infinite loop that increments a number and never sleeps. I set the latter thread to a higher priority than the former, and according to what I'm reading this should mean that the former thread doesn't get any CPU time.
But it does.
Why is this?
The first thread is created using:
HANDLE threadL = CreateThread(NULL, 0, threadLow, NULL, 0, &threadLiD);
and the other thread is just the main thread where I've put this command:
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
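For reference, here is a minimal, self-contained sketch of the test described above; the body of threadLow is a reconstruction (print, then sleep), since it isn't shown in the question:

#include <windows.h>
#include <stdio.h>

// Lower-priority thread: prints some text, then sleeps (reconstructed body).
DWORD WINAPI threadLow(LPVOID)
{
    for (;;) {
        printf("low-priority thread still running\n");
        Sleep(1000);
    }
}

int main()
{
    DWORD threadLiD;
    HANDLE threadL = CreateThread(NULL, 0, threadLow, NULL, 0, &threadLiD);

    // The main thread gets the higher priority and spins without sleeping.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
    volatile unsigned long long n = 0;
    for (;;) {
        ++n;
    }
}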

Related

C# - Worker thread with slots, items being dynamically added

I have a Windows service that polls a web service for new items every 30 seconds. If it finds any new items, it checks to see if they need to be "processed" and then puts them in a list to process. I spawn off different threads to process 5 at a time, and when one finishes, another one fills the empty slot. Once everything has finished, the program sleeps for 30 seconds and then polls again.
My issue is that while the items are being processed (which could take up to 15 minutes), new items are being created which may also need to be processed, and the main thread gets held up waiting for every last thread to finish before it sleeps and starts the process all over.
What I'm looking to do is have the main thread continue to poll the web service every 30 seconds; instead of getting held up, it would add any new items it finds to a list, which would be processed by a separate worker thread. That worker thread would still have, say, only 5 slots available, but they would essentially always all be filled, assuming the main thread keeps finding new items to process.
I hope that makes sense. Thanks!
EDIT: updated code sample
I put together this as a worker thread that operates on a ConcurrentQueue. Any way to improve this?
private void ThreadWorker() {
    DateTime dtStart = DateTime.Now;
    int iNumOfConcurrentSlots = 6;
    Thread[] threads = new Thread[iNumOfConcurrentSlots];
    while (true) {
        for (int i = 0; i < iNumOfConcurrentSlots; i++) {
            if (m_tAssetQueue.TryDequeue(out Asset aa)) {
                threads[i] = new Thread(() => ProcessAsset(aa));
                threads[i].Start();
                Thread.Sleep(500);
            }
        }
    }
}
EDIT: Ahh yeah, that won't work above. I need a way to avoid hard-coding the number of concurrent slots, and instead have each thread basically wait, look for something in the queue and, if it finds something, process it. But then I also need a way of signalling that the ProcessAsset() function has completed, to release the thread and allow another one to be created...
One simple way to do it is to have 5 threads reading from a concurrent queue. The main thread queues items and the worker threads do blocking reads from the queue.
Note: The workers are in an infinite loop. They call TryDequeue, process the item if they got one or sleep one second if they fail to get something. They can also check for an exit flag.
To have your service behave properly, you might have an independent polling thread that queues the items. The main thread is kept free to respond to start, stop, and pause requests.
Pseudo code for worker thread:
While true
    If TryDequeue then
        process data
    If exit flag is true, break
    While pause flag, sleep
    Sleep
Pseudo code for polling thread:
While true
    Poll web service
    Queue items in concurrent queue
    If exit flag true, break
    While pause flag, sleep
    Sleep
Pseudo code for main thread:
Start polling thread
Start n worker threads with above code
Handle stop:
    set exit flag to true
Handle pause:
    set pause flag to true
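Below is a minimal C# sketch of that pseudo code, using a ConcurrentQueue as in your edit. The Asset type and the ProcessAsset and PollWebService methods are stand-ins for your own types and calls, and the sleep intervals and thread counts are illustrative only:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

class Asset { /* stand-in for the item type polled from the web service */ }

class AssetProcessor
{
    private readonly ConcurrentQueue<Asset> _queue = new ConcurrentQueue<Asset>();
    private volatile bool _exit;    // set by the service's Stop handler
    private volatile bool _paused;  // set/cleared by the Pause/Continue handlers

    // Worker thread: pulls items off the queue whenever one is available.
    private void Worker()
    {
        while (!_exit)
        {
            while (_paused && !_exit) Thread.Sleep(1000);

            if (_queue.TryDequeue(out Asset item))
                ProcessAsset(item);     // may take up to 15 minutes
            else
                Thread.Sleep(1000);     // nothing queued, back off briefly
        }
    }

    // Polling thread: checks the web service every 30 seconds and queues new items.
    private void Poller()
    {
        while (!_exit)
        {
            while (_paused && !_exit) Thread.Sleep(1000);

            foreach (Asset item in PollWebService())
                _queue.Enqueue(item);

            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    public void Start(int workerCount)
    {
        new Thread(Poller) { IsBackground = true }.Start();
        for (int i = 0; i < workerCount; i++)
            new Thread(Worker) { IsBackground = true }.Start();
    }

    public void Stop()   { _exit = true; }
    public void Pause()  { _paused = true; }
    public void Resume() { _paused = false; }

    // Stubs standing in for the real processing and polling logic.
    private void ProcessAsset(Asset item) { /* ... */ }
    private IEnumerable<Asset> PollWebService() { return Array.Empty<Asset>(); }
}

Letting the workers own the dequeue loop means there is nothing to "release": a worker that finishes one item simply goes back to the queue for the next, so the slots stay filled as long as the poller keeps finding work.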

Ruby multithreading - stop and resume a specific thread

I want to be able to stop and run a specific thread in Ruby in the following context:
thread_hash = Hash.new()
loop do
  Thread.start(call.function) do |execute|
    operation = execute.extract(some_value_from_incoming_message)
    if thread_hash.has_key? operation
      thread_hash[operation].run
    elsif !thread_hash.has_key? operation
      thread_hash[operation] = Thread.current
      do_something_else_1
      Thread.stop
      do_something_else_2
      Thread.stop
      do_something_else_3
      thread_hash.delete(operation)
    else
      exit
    end
  end
end
In human language, the script above acts as a server which receives a message and extracts some parameter from it. If that parameter is already in the thread_hash, the suspended thread should be resumed.
If the parameter is not present in the thread_hash, the parameter is stored in the thread_hash along with the thread id, some function is executed and the current thread is suspended until it is resumed by a later iteration of the loop, and so on until the do_something_else_3 function has executed and the operation serviced in the current thread is removed from the hash.
Can a thread be resumed in Ruby based on its thread id, or does the new thread have to be given a name when it is started, like
thr = Thread.start
and be resumed only via that name, like:
thr.run
Is the solution described above realistic? Could it cause some sort of leak or deadlock due to an old thread being resumed from within a new thread, or are redundant threads automatically taken care of by Ruby?
It sounds to me like you're trying to do everything in every thread: read input, run existing threads, store new threads, delete old threads. Why not break up the problem?
hash = {}
loop do
  operation = get_value_from message
  if hash[operation] and hash[operation].alive?
    hash[operation].wakeup
  else
    hash[operation] = Thread.new do
      do_something1
      Thread.stop
      do_something2
      Thread.stop
      do_something3
    end
  end
end
Instead of wrapping the whole contents of the loop in a thread, only thread the message processing code. That lets it run in the background while the loop goes back to waiting for a message. This solves any sort of race/deadlock problem since all of the thread management occurs in the main thread.

Caller/Backtrace beyond a thread

As far as I know, it is possible to get only the portion of the caller/backtrace information that is within the current thread; anything prior to that (in the thread that created the current thread) is cut off. The following exemplifies this; the fact that a called b, which called c, which created the thread that called d, is cut off:
def a; b end
def b; c end
def c; Thread.new{d}.join end
def d; e end
def e; puts caller end
a
# => this_file:4:in `d'
# this_file:3:in `block in c'
What is the reason for this feature?
Is there a way to get the caller/backtrace information beyond the current thread?
I think I came up with my answer.
Creating it is not the only thing that can be done to a thread from outside that thread. Other than creating it, you can wake it up, etc. So it is not clear which operation should be attributed as part of the caller. For example, suppose there is a thread:
1: t = Thread.new{
2: Thread.stop
3: puts caller
4: }
5: t.wakeup
The thread t is created at line 1, but it puts itself to sleep at line 2 and is then woken up by line 5. So, when we locate ourselves at the caller call on line 3 and consider the part of the caller outside of the thread, it is not clear whether Thread.new on line 1 should be part of it, or t.wakeup on line 5 should be part of it. Therefore, there is no clear notion of callers beyond the current thread.
However, if we define a clear notion, then it is possible for caller beyond a thread to make sense. For example, always adding the callers up to the creation of the thread may make sense. Alternatively, adding the callers leading to the most recent wakeup or creation may make sense. It is up to the definition.
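As an illustration only (not part of the original question), here is a small sketch of that first definition: capture the spawning thread's backtrace with caller at the moment the thread is created, and carry it into the new thread by hand:

def a; b end
def b; c end
def c
  creation_backtrace = caller          # capture the spawning thread's stack here
  Thread.new { d(creation_backtrace) }.join
end
def d(bt); e(bt) end
def e(bt)
  puts caller + bt                     # this thread's frames, then the spawning stack
end
a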
The answer to both your questions is really the same. Consider a slightly more involved main thread. Instead of simply waiting for the spawned thread to end in c, the main thread goes on calling other functions, perhaps even returning from c and going about its business while the spawned thread goes about its own.
This means that the stack in the main thread has changed since the thread starting in d was spawned. In other words, by the time you call puts caller the stack in the main thread is no longer in the state it was when the secondary thread was created. There is no way to safely walk back up the stack beyond this point.
So in short:
The stack of the spawning thread will not remain in the state it was in when the thread was spawned, so walking back beyond the start of a thread's own stack is not safe.
No, since the entire idea behind threads is that they are (pseudo) parallel, their stacks are completely unrelated.
Update:
As suggested in the comments, the stack of the current thread can be copied to the new thread at creation time. This would preserve the information that lead up to the thread being created, but the solution is not without its own set of problems.
Thread creation will be slower. That could be ok, if there was anything to gain from it, but in this case, is it?
What would it mean to return from the thread entry function?
It could return to the function that created the thread and keep running as if it was just a function call - only that it now runs in the second thread, not the original one. Would we want that?
There could be some magic that ensures that the thread terminates even if it's not at the top of the call stack. This would make the information in the call stack above the thread entry function incorrect anyways.
On systems with limits on the stack size for each thread, you could run into problems where a thread runs out of stack even if it's not using very much on its own.
There are probably other scenarios and peculiarities that could be thought out too, but the way threads are created with their own empty stack to start with makes the model both simple and predictable without leaving any useful information out of the call stack.

Ruby threads and mutex

Why does the following ruby code not work?
2 | require 'thread'
3 |
4 | $mutex = Mutex.new
5 | $mutex.lock
6 |
7 | t = Thread.new {
8 | sleep 10
9 | $mutex.unlock
10 | }
11 |
12 | $mutex.lock
13 | puts "Delayed hello"
When I'm running it, I get an error:
./test.rb:13:in `lock': thread 0x7f4557856378 tried to join itself (ThreadError)
from ./test.rb:13
What is the right way to synchronize two threads without joining them (both threads must continue running after synchronization)?
This is old but I'm contributing since it's a bit scary that none of the other answers (at time of writing) seem to be correct. The original code is clearly attempting to:
Create a mutex in the main thread and lock it.
Start a new thread, which may begin running at any time and after any delay subject to the whims of the Ruby runtime.
Have this thread unlock the mutex only once it's finished doing its work.
Have the main thread then deliberately re-lock the mutex, with the intention that it's spawned a thread which will unlock it. The main thread waits for that.
Then the main thread continues running.
#user2413915: Your solution omits the step of locking again in the main thread, so it won't wait for the spawned thread as intended.
#Paul Rubel: Your code assumes that the spawned thread gets as far as its lock of the mutex before the main thread does. This is a race condition. If the main thread continues to execute and locks first, the spawned thread will be blocked until after the main thread has printed "Delayed hello", which is the exact opposite of the desired outcome. You probably ran it by pasting into the IRB prompt; if you try with your example modified so that the end and Mutex lock are on the same line, it'll fail, printing the message too early (i.e. "end; $mutex.lock"). Either way, it's relying on behaviour of the Ruby runtime that's working by chance.
The original code should actually work fine in principle, albeit arguably lacking in elegance - in practice the Ruby 1.9+ runtime won't allow it as it "sees" two consecutive locks in the main thread without an unlock and doesn't "realise" that there's a spawned thread which is going to do the unlocking. Ruby (in this case technically erroneously) raises a ThreadError deadlock exception.
Instead, make cunning use of the Ruby Queue class. When you try to pull something off a Queue, the call will block until an item is available. So:
require 'thread'

queue = Queue.new

t = Thread.new {
  sleep 10
  queue.push( nil ) # Push any object you like - here, it's a NilClass instance
}

queue.pop() # Blocks until thread 't' pushes onto the queue
puts "Delayed hello"
If the spawned thread runs first and pushes onto the queue, then the main thread will just pop the item and keep going. If the main thread tries to pop before the spawned thread pushes, it'll wait for the spawned thread.
[Edit: Note that the object pushed onto the queue could be the results of the spawned thread's processing task, so the main thread gets to wait until processing is complete and get the processing result in one go].
I've tested this on Ruby 1.8.7-p375 and Ruby 2.1.2 via rbenv with success, so it's reasonable to assume that the standard library Queue class is functional across all common major Ruby versions.
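As an illustration of the bracketed note above (the names here are mine, not the original answer's), the pushed object can simply be the worker's result:

require 'thread'

queue = Queue.new

t = Thread.new {
  sleep 10
  queue.push( "result of the processing task" ) # hand the result to the main thread
}

result = queue.pop() # blocks until the spawned thread pushes its result
puts "Delayed hello, got: #{result}"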
You do not need to lock the mutex on line 12 again.
require 'thread'

$mutex = Mutex.new
$mutex.lock

t = Thread.new {
  sleep 10
  $mutex.unlock
}

puts "Delayed hello"
This will work.

Make parent thread wait till child thread finishes in VC

According to MSDN:
The WaitForSingleObject function can wait for the following objects:
Change notification
Console input
Event
Memory resource notification
Mutex
Process
Semaphore
Thread
Waitable timer
Then we can use WaitForSingleObject to make the parent-thread wait for child ones.
#include <windows.h>

// child is the thread function (DWORD WINAPI child(LPVOID)) defined elsewhere
int main()
{
    HANDLE h_child_thread = CreateThread(0, 0, child, 0, 0, 0); // create a thread in VC
    WaitForSingleObject(h_child_thread, INFINITE); // so the parent thread will wait
    return 0;
}
Question
Is there any other way to make parent-thread wait for child ones in VC or Windows?
I don't quite understand the usage of WaitForSingleObject here; does it mean that the thread's handle will be available when the thread terminates?
You can establish communication between threads in multiple ways and the terminating thread may somehow signal its waiting thread. It could be as simple as writing some special value to a shared memory location that the waiting thread can check. But this won't guarantee that the terminating thread has terminated when the waiting thread sees the special value (ordering/race conditions) or that the terminating thread terminates shortly after that (it can just hang or block on something) and it won't guarantee that the special value gets ever set before the terminating thread actually terminates (the thread can crash). WaitForSingleObject (and its companion WaitForMultipleObjects) is a sure way to know of a thread termination when it occurs. Just use it.
The handle will still be available in the sense that its value won't be gone. But it is practically useless after the thread has terminated, except you need this handle to get the thread exit code. And you still need to close the handle in the end. That is unless you're OK with handle/memory leaks.
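To tie those points together, here is a minimal sketch (the child function body is just an illustration): wait for the thread, read its exit code while the handle is still valid, then close the handle:

#include <windows.h>
#include <stdio.h>

// Illustrative thread function; the exit code is whatever the thread returns.
DWORD WINAPI child(LPVOID)
{
    Sleep(2000); // pretend to do some work
    return 42;
}

int main()
{
    HANDLE h = CreateThread(NULL, 0, child, NULL, 0, NULL);

    WaitForSingleObject(h, INFINITE);   // returns once the thread has terminated

    DWORD exitCode = 0;
    GetExitCodeThread(h, &exitCode);    // the handle is still valid for this
    printf("child exited with %lu\n", exitCode);

    CloseHandle(h);                     // finally release the handle
    return 0;
}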
For the first question - yes. The method commonly used here is "Join"; the usage is language dependent.
In .NET C++ you can use the Thread's Join method. This is from MSDN:
Thread* newThread = new Thread(new ThreadStart(0, Test::Work));
newThread->Start();

if (newThread->Join(waitTime + waitTime))
{
    Console::WriteLine(S"New thread terminated.");
}
else
{
    Console::WriteLine(S"Join timed out.");
}
Secondly, the thread has terminated when you are signaled by WaitForSingleObject, but the handle is still valid (for a terminated thread), so you still need to explicitly close the handle with CloseHandle.
