communicate between two AsyncTasks running in parallel - android-asynctask

I have two AsyncTasks running in parallel in my application, started from the main thread: say A and B. I need to update / refresh the values in task A when some values change in task B. How do I go about it?

I managed to solve this by using Handlers :D
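For anyone landing here, a minimal sketch of what such a Handler-based solution can look like (the question shows no code, so all names and the message id below are illustrative): task B pushes each changed value to a Handler on the main looper, and task A reads the latest value.

    import android.os.Handler;
    import android.os.Looper;
    import android.os.Message;

    public class TaskCoordinator {
        private static final int MSG_VALUE_CHANGED = 1; // illustrative id
        private volatile int latestValue;               // read by task A

        private final Handler handler = new Handler(Looper.getMainLooper()) {
            @Override
            public void handleMessage(Message msg) {
                if (msg.what == MSG_VALUE_CHANGED) {
                    latestValue = msg.arg1;             // task A will see this
                }
            }
        };

        // call from task B's doInBackground() whenever a value changes
        public void notifyValueChanged(int newValue) {
            handler.obtainMessage(MSG_VALUE_CHANGED, newValue, 0).sendToTarget();
        }

        public int getLatestValue() { return latestValue; }
    }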

Related

How to start a JMeter thread from another thread?

I have Thread A that performs user login, sets some settings, and creates some records. While working on the records page, there is a background thread, Thread B, that is called every 15s and performs some sort of synchronisation, but only while I'm on the records page.
What I managed to do is create a second thread that fires these requests every 15s. Using a BeanShell PreProcessor I share the cookies between the two threads, so that the HTTP Cookie Manager in the second thread uses the same variables/values as the first one. My requests are working fine.
What I can't figure out is how to trigger Thread B when Thread A has reached the step that involves the records. One option is to delay Thread B for a certain amount of time, but this isn't very reliable as I can't know beforehand how long it takes Thread A to finish creating the users.
Is there a way to trigger a thread from another thread?
Take a look at the Inter-Thread Communication Plugin - it allows synchronizing actions between different threads (even if they are in different thread groups) based on a simple FIFO queue.
The logic would be:
When Thread A has done the relevant action, put something into the queue using the fifoPut function
Thread B will be scanning the queue; when anything comes up, it will get the value and start executing its own actions.
Check out the SynchronizationExample.jmx test plan for a reference implementation.
You can install the Inter-Thread Communication plugin using the JMeter Plugins Manager.
Also be aware that starting from JMeter 3.1 it is recommended to use JSR223 test elements and the Groovy language for scripting, so kindly avoid Beanshell going forward.
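As a minimal sketch (the queue name RECORDS_READY is just an example), Thread A signals right after the step that creates the records:

    ${__fifoPut(RECORDS_READY,go)}

and Thread B starts its synchronisation samplers with a blocking read:

    ${__fifoPop(RECORDS_READY)}

__fifoPop waits until a value appears in the queue, so Thread B is released exactly when Thread A reaches the records step, with no guessed delays.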

function posted to boost::asio::io_service::strand not executed

I am using boost::asio in a quite complex scenario and I am experiencing a problem where a method posted to a boost::asio::io_service::strand object is not executed although several worker threads on the io_service are running and idle.
As I said, the scenario is very complex; I'm still trying to develop a reasonably small repro scenario. But the conditions are as follows:
one io_service is running and has a work-object assigned to it
4 worker threads are assigned to the io_service (called io_service::run on each)
several strand objects are used to post numerous different tasks
in some of the tasks that are executed via the strands, new tasks are posted to the strand
The whole system works well and stable, except for one situation:
When calling the destructor of one of the classes, it posts the abort handler to the strand (to initiate aborting in sync with the other tasks) and then waits until the abort is done.
Every once in a while it now happens that the abort handler is never executed (the destructor is called from a handler invocation on another strand object).
I assume the problem is that the strand tries to execute the handler on the same thread from which it was posted. And since this thread is blocked waiting for the abort handler to be executed, the program deadlocks.
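If that assumption is right, this is the classic self-deadlock of blocking inside an execution context while waiting for work queued behind you. Here is the same shape sketched in Java, with a single-threaded executor standing in for the strand - an analogy, not asio code:

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class StrandDeadlockAnalogy {
        public static void main(String[] args) {
            // the "strand": handlers are executed strictly one at a time
            ExecutorService strand = Executors.newSingleThreadExecutor();

            Future<?> outer = strand.submit(() -> {
                // ~ the destructor running inside a strand handler
                Future<?> abortDone = strand.submit(
                        () -> System.out.println("abort handler")); // never runs
                try {
                    abortDone.get(); // blocks the only serializing thread
                } catch (InterruptedException | ExecutionException e) {
                    Thread.currentThread().interrupt();
                }
            });

            // outer.get() here would hang forever: the queued abort handler
            // can only run after the handler that is waiting for it returns.
        }
    }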
My questions now:
- is my assumption correct?
- is there any way to avoid this situation?
- how would you approach this problem (several async tasks running, with the need to abort them synchronously)?
Thanks a lot for your help!
m.

How is wait_for_completion different from wake_up_interruptible

How is wait_for_completion different from wake_up_interruptible?
Actually the question is: how are completions different from wait queues?
They look like the same concept to me.
The completion structure internally uses wait queues and locks.
The completion structure was introduced to address a very commonly occurring scenario, where multiple threads are waiting on some event. Once that event happens, you want only one of the waiting threads to start running.
The key here is that kernel developers don't have to implement and maintain the wait queue themselves, which makes a kernel developer's life easier.
Adding on to Harman's answer, I would also say that those two functions are called in different contexts: wake_up_interruptible() will wake up all threads waiting on a wait queue, whereas wait_for_completion() will wait until a specific task completes. Those are two different things to me.
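As a rough user-space analogy of the semantics described above (my own mapping onto java.util.concurrent, not kernel code): a completion behaves like a semaphore that starts at zero, so waiters block until the event side posts it, and each complete() wakes exactly one waiter.

    import java.util.concurrent.Semaphore;

    public class CompletionAnalogy {
        private final Semaphore done = new Semaphore(0);

        public void waitForCompletion() throws InterruptedException {
            done.acquire();        // ~ wait_for_completion(): sleep until posted
        }

        public void complete() {
            done.release();        // ~ complete(): wake exactly one waiter
        }

        public void completeAll(int waiters) {
            done.release(waiters); // ~ complete_all(), approximately
        }
    }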

Ruby - Control child threads from main thread

The main program creates a child thread. The child thread runs a loop, and this thread needs to be paused and resumed based on events taking place in the main thread.
What would be the best way to accomplish this? IPC?
Communication between threads should be done using thread-safe classes.
You can use Queue, since it has a blocking method: pop.
If you want a more specific response you need to provide more details about your use case.
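The same pattern sketched in Java for concreteness (Ruby's Queue#pop blocks just like BlockingQueue#take below; all names here are mine, for illustration):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class PausableWorker {
        enum Command { PAUSE, RESUME, STOP }

        private final BlockingQueue<Command> commands = new LinkedBlockingQueue<>();

        // runs on the child thread
        public void runLoop() throws InterruptedException {
            boolean paused = false;
            while (true) {
                // while paused, block until the main thread sends a command;
                // otherwise just peek so the loop keeps doing its work
                Command cmd = paused ? commands.take() : commands.poll();
                if (cmd == Command.STOP) return;
                if (cmd == Command.PAUSE) { paused = true; continue; }
                if (cmd == Command.RESUME) paused = false;
                doWork(); // one iteration of the child thread's loop body
            }
        }

        // called from the main thread when its events fire
        public void pause()  { commands.add(Command.PAUSE); }
        public void resume() { commands.add(Command.RESUME); }
        public void stop()   { commands.add(Command.STOP); }

        private void doWork() { /* ... */ }
    }

No IPC is needed; a shared thread-safe queue between the two threads is enough.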

Is Hadoop's Job thread-safe?

Does anyone know if org.apache.hadoop.mapreduce.Job is thread-safe? In my application I create a thread for each job and then call waitForCompletion. And I have another monitor thread that checks every job's state with isComplete.
Is that safe? Are jobs thread-safe? The documentation doesn't seem to mention anything about it...
Thanks
Udi
Unlike the others, I also use threads to submit jobs in parallel and wait for their completion. You just have to use one Job instance per thread. If you share the same Job instance across multiple threads, you have to take care of the synchronization yourself.
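A sketch of that one-Job-instance-per-thread pattern (job setup elided; names are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class ParallelJobs {
        public static void main(String[] args) {
            for (int i = 0; i < 3; i++) {
                final int n = i;
                new Thread(() -> {
                    try {
                        // each thread builds and owns its own Job instance
                        Job job = Job.getInstance(new Configuration(), "job-" + n);
                        // ... set jar, mapper, reducer, input/output paths ...
                        job.waitForCompletion(true);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }).start();
            }
        }
    }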
Why would you want to write a separate thread for each job? What exactly is your use case?
You can run multiple jobs in your Hadoop cluster. Do you have dependencies between the multiple jobs?
Suppose you have 10 jobs running and 1 job fails: would you need to re-run the 9 successful ones?
Finally, the JobTracker will take care of scheduling multiple jobs on the Hadoop cluster. If you do not have dependencies, then you should not be worried about thread safety. If you have dependencies, then you may need to re-think your design.
Yes, they are. Actually the file is split into blocks, and each block is processed on a separate node. All the map tasks run in parallel and their output is then fed to the reducer once they are done. There is no question of synchronization as you would think about it in a multithreaded program. In a multithreaded program, all the threads run on the same box, and since they share some of the data, you have to synchronize them.
Just in case you need another kind of parallelism at the map task level, you should override the run() method in your mapper and work with multiple threads there. The default implementation calls setup(), then map() once per record to process, and finally it calls the cleanup() method once.
Hope this helps someone!
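For reference, the run() method being described looks essentially like this in the mapreduce API (the type parameters below are just an example), and the comments mark where a multithreaded override would diverge:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ThreadedMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        @Override
        public void run(Context context) throws IOException, InterruptedException {
            setup(context);
            try {
                // Default behaviour: a single-threaded loop over the records.
                // A multithreaded override would hand each record to a worker
                // pool here, copying the key/value objects (the framework
                // reuses them) and synchronizing access to the context.
                // Hadoop's bundled MultithreadedMapper implements that pattern.
                while (context.nextKeyValue()) {
                    map(context.getCurrentKey(), context.getCurrentValue(), context);
                }
            } finally {
                cleanup(context);
            }
        }
    }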
If you are checking whether the jobs have finished, I think you are a bit confused about how MapReduce works. You ought to be letting Hadoop do that for itself.
