I need to run multiple threads on an embedded Linux target.
One of the threads requires a lot of resources, so I need it to run in the background at a low priority.
There will be times when the higher-priority threads have nothing to do. A typical Vala Thread.create looks like this:
    Thread.create<void*> (pProcessor->run, true);
Is there a way to specify the thread priority?
You can't use the threading stuff in GLib for this; you would have to use pthreads directly. There is some information on how to do that in C here. You would also need to create Vala bindings for the relevant functions, since nobody has done so yet (it's pretty easy... if you understand how Vala maps to C it would only take a couple of minutes).
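For reference, here is a rough C sketch of the kind of calls you would end up binding. It is an illustration only: it assumes an ordinary SCHED_OTHER thread on Linux and relies on the Linux/NPTL behaviour that the nice value is per-thread, so the worker can lower its own priority with setpriority().

    #include <pthread.h>
    #include <sys/resource.h>   /* setpriority */
    #include <stdio.h>

    static void *worker(void *arg)
    {
        /* On Linux/NPTL the nice value is per-thread, so this only
           deprioritizes the calling thread (19 = lowest priority). */
        if (setpriority(PRIO_PROCESS, 0, 19) != 0)
            perror("setpriority");

        /* ... long-running, resource-hungry background work ... */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        /* higher-priority work continues on other threads here */
        pthread_join(tid, NULL);
        return 0;
    }

A Vala binding would essentially wrap setpriority() (or pthread_setschedparam() if you use a real-time policy) so the thread's run method can call it on startup.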
If I were you, I would look into using a priority queue instead. If you don't feel like writing your own, bump should already have everything you need (specifically, Semaphore and/or TaskQueue), or AsyncPriorityQueue if you would prefer to work at a lower level.
1. How do I understand asynchronous I/O in Windows?
2. If I write/read something to a file using asynchronous I/O:
    WriteFile();
    ReadFile();
    WriteFile();
How many threads does the OS generate to accomplish these tasks? Do the 3 tasks run simultaneously in a multi-threaded way, or do they run one after another, just in a different order?
3. Can I use multithreading, with each thread using asynchronous I/O to read or write the same file?
1. How do I understand asynchronous I/O in Windows?
Read the Win32 documentation. Search the web. Don't expect an answer to such a large, broad question here on SO.
2. If I write/read something to a file using asynchronous I/O:
    WriteFile();
    ReadFile();
    WriteFile();
How many threads does the OS generate to accomplish these tasks?
I don't think it generates any. It will re-use existing thread contexts to execute the kernel function calls. Basically, the OS schedules the work and borrows a thread to do it - which is fine, since the kernel context is always the same.
3. Can I use multithreading, with each thread using asynchronous I/O to read or write the same file?
I believe so, yes. I don't know whether the order of execution is guaranteed to match the order of submission; if it isn't, you will get unpredictable results if you issue concurrent reads/writes on the same byte ranges.
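For what it's worth, here is a minimal sketch of a single overlapped write in C (the file name and buffer are placeholders and error handling is trimmed). The key points are the FILE_FLAG_OVERLAPPED flag, the ERROR_IO_PENDING check, and the fact that none of your threads blocks until you explicitly ask for the result:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* FILE_FLAG_OVERLAPPED requests asynchronous I/O on this handle. */
        HANDLE h = CreateFileA("test.dat", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS,
                               FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        char buf[] = "hello";
        OVERLAPPED ov = {0};
        ov.Offset = 0;                                   /* write at offset 0 */
        ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

        /* Returns FALSE with ERROR_IO_PENDING when the request is queued. */
        if (!WriteFile(h, buf, sizeof buf - 1, NULL, &ov) &&
            GetLastError() != ERROR_IO_PENDING)
            return 1;

        /* Do other work here, then block (last argument TRUE) for completion. */
        DWORD written = 0;
        GetOverlappedResult(h, &ov, &written, TRUE);
        printf("wrote %lu bytes\n", (unsigned long)written);

        CloseHandle(ov.hEvent);
        CloseHandle(h);
        return 0;
    }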
To your questions:
How many threads does the OS generate to accomplish these tasks?
It depends on whether you are using the Windows thread pools, IOCP, etc. Generally, you decide.
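For example, with an I/O completion port the number of worker threads is explicitly your choice. A rough sketch (the thread count and names are arbitrary, error handling omitted):

    #include <windows.h>

    static DWORD WINAPI io_worker(LPVOID param)
    {
        HANDLE port = (HANDLE)param;
        DWORD bytes;
        ULONG_PTR key;
        OVERLAPPED *ov;

        /* Each worker blocks here until the kernel hands it a completed I/O. */
        while (GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE))
        {
            /* process the completed read/write described by key and ov */
        }
        return 0;
    }

    int main(void)
    {
        /* One port; you decide how many threads service it (two, here). */
        HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
        for (int i = 0; i < 2; i++)
            CreateThread(NULL, 0, io_worker, port, 0, NULL);

        /* Associate each overlapped file handle with the port via
           CreateIoCompletionPort(file, port, key, 0), then issue
           ReadFile/WriteFile with OVERLAPPED structures as usual. */
        Sleep(1000);   /* stand-in for the rest of the program */
        return 0;
    }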
Do the 3 tasks run simultaneously in a multi-threaded way, or do they run one after another, just in a different order?
This depends on your architecture. On a single-core machine, the 3 tasks would run one after another, and the order would be decided by the OS. On a multi-core machine they might run together, depending on how the OS schedules the threads.
3. Can I use multithreading, with each thread using asynchronous I/O to read or write the same file?
That is outside my knowledge, so someone else will need to answer that one.
I suggest getting a copy of Windows via C/C++, as it has a very large chapter on asynchronous I/O.
I guess it depends on which operating system you are using, but you shouldn't have to worry about this anyhow; it is transparent and should not affect how you write your code.
If you use the standard read and write calls on Windows, you don't have to care that the system may not write the data immediately, unless you are writing to the command line and waiting for the user to type some input. The OS is responsible for ensuring that what you write will eventually reach the disk, and it will do a much better job of that than you can anyway.
If you are working with some weird asynchronous I/O, then please rephrase your question.
I suggest looking for Jeffrey Richter's books on Win32 programming. They are very well-written guides for just this sort of thing.
I think he has a newer book (or books) on C#, so watch out that you don't buy the wrong one.
I need a lock in Cocoa that does not burn a CPU when I try to lock it and it is already locked somewhere else - something that is implemented in the kernel scheduler.
It sounds like you're trying to find a lock that's not a spin lock. EVERY lock must use some CPU, or else it couldn't function. :-)
NSLock is the most obvious in Cocoa. It has a simple -lock, -unlock interface and uses pthread mutexes in its implementation. There are a number of more sophisticated locks in Cocoa for more specific needs: NSRecursiveLock, NSCondition, NSDistributedLock, etc.
There is also the @synchronized directive, which is even simpler to use but has some additional overhead to it.
GCD also has a counted semaphore object if you're looking for something like that.
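As a rough illustration of that last point: GCD is a plain C API (available from 10.6), and a counted semaphore that lets one thread at a time touch a shared resource might look like the sketch below. The surrounding function names are made up for the example.

    #include <dispatch/dispatch.h>

    /* Created once, e.g. at startup; a count of 1 makes it act like a mutex,
       a larger count allows that many concurrent users. */
    static dispatch_semaphore_t resource_sem;

    static void setup(void)
    {
        resource_sem = dispatch_semaphore_create(1);
    }

    static void use_shared_resource(void)
    {
        /* Sleeps (no spinning) until the semaphore is available. */
        dispatch_semaphore_wait(resource_sem, DISPATCH_TIME_FOREVER);
        /* ... touch the shared resource ... */
        dispatch_semaphore_signal(resource_sem);
    }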
My recommendation is that, instead of locks, you look at using NSOperation and an NSOperationQueue whose -setMaxConcurrentOperationCount: is set to 1 to access the shared resource. By using a single-wide operation queue, you can guarantee that only one thing at a time will make use of the resource, while still allowing multiple threads to do so.
This avoids the need for locks, and since everything is done in user space, can provide much better performance. I've replaced almost all of my locking around shared resources with this technique, and have been very pleased with the results.
Do you mean "lock" as in a mutex between threads, or a mutex between processes, or a mutex between disparate resources on a network, or...?
If it's between threads, you use NSLock. If it's between processes, then you can use POSIX named semaphores.
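If you go the named-semaphore route, a minimal C sketch might look like this (the name "/myapp.lock" is made up; any process that opens the same name shares the semaphore):

    #include <fcntl.h>       /* O_CREAT */
    #include <semaphore.h>
    #include <stdio.h>

    int main(void)
    {
        /* An initial value of 1 makes it behave like a cross-process mutex. */
        sem_t *sem = sem_open("/myapp.lock", O_CREAT, 0644, 1);
        if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

        sem_wait(sem);       /* sleeps in the kernel, no busy-waiting */
        /* ... critical section ... */
        sem_post(sem);

        sem_close(sem);
        return 0;
    }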
If you really want kernel locks and know what you are doing, you can use
<libkern/OSAtomic.h>
Be sure to always use the "barrier" variants. These are faster and much more dangerous than POSIX locks. If you can target 10.6 with new code, then GCD is a great way to go. There is a great podcast on using the kernel synchronization primitives at: http://www.mac-developer-network.com/shows/podcasts/lnc/lnc032/
I need to make some network-bound calls (e.g., fetch a website) and I don't want them to block the UI. Should I be using NSThread or Python's threading module if I am working in PyObjC? I can't find any information on how to choose one over the other. Note that I don't really care about Python's GIL, since my tasks are not CPU-bound at all.
It will make no difference; you will get the same behavior with slightly different interfaces. Use whichever fits best into your system.
Learn to love the run loop. Use Cocoa's URL-loading system (or, if you need plain sockets, NSFileHandle) and let it call you when the response (or failure) comes back. Then you don't have to deal with threads at all (the URL-loading system will use a thread for you).
Pretty much the only time to create your own threads in Cocoa is when you have a large task (>0.1 sec) that you can't break up.
(Someone might say NSOperation, but NSOperationQueue is broken and RAOperationQueue doesn't support concurrent operations. That's fine if you already have a bunch of NSOperationQueue code or really want to prepare for a working NSOperationQueue, but if you need concurrency now, use the run loop or threads.)
I'm more fond of the native Python threading solution, since I can join and keep references to threads. AFAIK, NSThread doesn't support joining and cancelling, and you can get a variety of things done with Python threads.
Also, it's a bummer that NSThread entry points can't take multiple arguments, and though there are workarounds for this (like packing them into an NSDictionary or NSArray), it's still not as elegant or as simple as invoking a thread with its arguments laid out as ordinary parameters.
But yeah, if the situation demands that you use NSThread, there shouldn't be any problem at all. Otherwise, it's fine to stick with native Python threads.
I have a different suggestion, mainly because Python threading is just plain awful because of the GIL (Global Interpreter Lock), especially when you have more than one CPU core. There is a video presentation that goes into this in excruciating detail, but I cannot find the video right now - it was done by a Google employee.
Anyway, you may want to think about using the subprocess module instead of threading (have a helper program that you can execute, or use another binary on the system). Or use NSThread; it should give you more performance than what you can get with CPython threads.
I am developing a multi-threaded application in Cocoa. The main thread takes values from the user, and when a button is clicked I invoke a secondary thread in which a long calculation takes place. Now from this thread I have to return the output of every step of the calculation to the main thread. I want to periodically send data from one thread to the other. I can't find any simple example that does this. Any ideas?
There are a number of ways to do this, in rough order of complexity (easiest first):
use NSObject's performSelectorOnMainThread:withObject:waitUntilDone:, which is pretty self-explanatory
use performSelector:onThread:withObject:waitUntilDone:, which lets you go the other way
use an NSNotification (and NSDistributedNotificationCenter), though you can easily run into a race condition if you're not careful
use NSPorts to send data back and forth
Check out the doc that Abizer mentioned for details on all of these.
performSelectorOnMainThread:withObject:waitUntilDone: is often the easiest way to update the UI with a background thread's progress. You could also create your own storage area that's safe to access between threads using NSLock or a similar mechanism, or even use distributed objects (which also works between processes or over a network).
Then there are NSOperationQueue and NSOperation, which do help a lot to simplify multi-threaded programming, although a lot of programmers have been avoiding them since they can cause a crash in certain circumstances under Leopard.
Have a look at the Apple docs for this.
You may need to create an ADC member account, but this is free
Multi-threaded Cocoa Programs
I am writing a simple memory game using Ruby + Qt (trying to get away from C++ for a while...).
In order to allow an X-second timeout to view two open pieces, I need either timers or to do the work in a background thread.
What is the simplest way of implementing this without reinventing the wheel?
Ruby threads? Qt threads? Qt timers?
I don't know if it is the best solution, but:
    block = Proc.new { Thread.pass }   # let other Ruby threads run
    timer = Qt::Timer.new(window)
    invoke = Qt::BlockInvocation.new(timer, block, "invoke()")
    Qt::Object.connect(timer, SIGNAL("timeout()"), invoke, SLOT("invoke()"))
    timer.start(1)                     # fire roughly every millisecond
This makes Ruby (green) threads run alongside the Qt event loop. Adjust start(x) for your needs.
The decision to choose Qt threads/timers or Ruby ones is probably a personal one, but you should remember that Ruby threads are green: they are implemented by the Ruby interpreter and cannot scale across multiple processor cores. For a simple memory game with a timer, though, you probably don't need to worry about that.
Although somewhat unrelated, Midiator, a Ruby interface to MIDI devices uses Ruby threads to implement a timer.
Also, have a look at Leslie Viljoen's article; he says that Ruby's threads lock up when Qt form widgets are waiting for input. He also provides some sample code to implement Qt timers (which look quite easy and appropriate for what you are doing).
Thanks.
Solved it using QTimer::singleShot.
It is sufficient in my case - it fires a one-shot timer every time two tiles are displayed.