I have the following implementation of RecursiveAction. The single purpose of this class is to print the numbers from 0 to 9, from different threads if possible:
public class MyRecursiveAction extends RecursiveAction {

    private final int num;

    public MyRecursiveAction(int num) {
        this.num = num;
    }

    @Override
    protected void compute() {
        if (num < 10) {
            System.out.println(num);
            new MyRecursiveAction(num + 1).fork();
        }
    }
}
And I thought that invoking awaitQuiescence would make the current thread wait until all tasks (submitted and forked) are completed:
public class Main {

    public static void main(String[] args) {
        ForkJoinPool forkJoinPool = new ForkJoinPool();
        forkJoinPool.execute(new MyRecursiveAction(0));
        System.out.println(forkJoinPool.awaitQuiescence(5, TimeUnit.SECONDS) ? "tasks" : "time");
    }
}
But I don't always get the correct result: instead of printing 10 numbers, it prints anywhere from 0 to 10 of them.
But if I add helpQuiesce to my implementation of RecursiveAction:
public class MyRecursiveAction extends RecursiveAction {

    private final int num;

    public MyRecursiveAction(int num) {
        this.num = num;
    }

    @Override
    protected void compute() {
        if (num < 10) {
            System.out.println(num);
            new MyRecursiveAction(num + 1).fork();
        }
        RecursiveAction.helpQuiesce(); // here
    }
}
Everything works fine.
I want to know what awaitQuiescence is actually waiting for.
You get an idea of what happens when you change System.out.println(num); to System.out.println(num + " " + Thread.currentThread());
This may print something like:
0 Thread[ForkJoinPool-1-worker-3,5,main]
1 Thread[main,5,main]
tasks
2 Thread[ForkJoinPool.commonPool-worker-3,5,main]
When awaitQuiescence detects that there are pending tasks, it helps out by stealing one and executing it directly. Its documentation says:
If called by a ForkJoinTask operating in this pool, equivalent in effect to ForkJoinTask.helpQuiesce(). Otherwise, waits and/or attempts to assist performing tasks until this pool isQuiescent() or the indicated timeout elapses.
Emphasis added by me
This happens here; as we can see, a task prints “main” as its executing thread. Then, the behavior of fork() is specified as:
Arranges to asynchronously execute this task in the pool the current task is running in, if applicable, or using the ForkJoinPool.commonPool() if not inForkJoinPool().
Since the main thread is not a worker thread of a ForkJoinPool, the fork() will submit the new task to the commonPool(). From that point on, the fork() invoked from a common pool’s worker thread will submit the next task to the common pool too. But awaitQuiescence invoked on the custom pool doesn’t wait for the completion of the common pool’s tasks and the JVM terminates too early.
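One way to see where each task actually runs (a hedged sketch) is to print the hosting pool inside compute():

// ForkJoinTask.getPool() returns the pool hosting the current thread,
// or null when the task is being executed outside any ForkJoinPool
// (e.g. by the main thread inside awaitQuiescence).
System.out.println(num + " pool: " + ForkJoinTask.getPool());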
If you’re going to say that this is a flawed API design, I wouldn’t object.
The solution is not to use awaitQuiescence for anything but the common pool¹. Normally, a RecursiveAction that splits off subtasks should wait for their completion. Then, you can wait for the root task’s completion to wait for the completion of all associated tasks.
The second half of this answer contains an example of such a RecursiveAction implementation.
¹ awaitQuiescence is useful when you don’t have hands on the actual futures, like with a parallel stream that submits to the common pool.
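For this question’s code, that means awaitQuiescence can work if everything is kept in the common pool, since fork() keeps submitting there anyway; a minimal sketch:

public static void main(String[] args) {
    // Submit to the common pool, the same pool fork() will target for subtasks.
    ForkJoinPool.commonPool().execute(new MyRecursiveAction(0));
    System.out.println(ForkJoinPool.commonPool()
            .awaitQuiescence(5, TimeUnit.SECONDS) ? "tasks" : "time");
}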
Everything works fine.
No, it does not; you got lucky that it worked when you inserted:
RecursiveAction.helpQuiesce();
To explain this, let's change your example a bit:
static class MyRecursiveAction extends RecursiveAction {

    private final int num;

    public MyRecursiveAction(int num) {
        this.num = num;
    }

    @Override
    protected void compute() {
        if (num < 10) {
            System.out.println(num);
            new MyRecursiveAction(num + 1).fork();
        }
    }
}
public static void main(String[] args) {
    ForkJoinPool forkJoinPool = new ForkJoinPool();
    forkJoinPool.execute(new MyRecursiveAction(0));
    LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(2));
}
If you run this, you will notice that you get the result you expect. There are two main reasons for this. First, the fork method will execute the task in the common pool, as the other answer already explained. Second, threads in the common pool are daemon threads: the JVM does not wait for them to finish before exiting, it exits early. So if that is the case, you might ask why this version works. It does because of this line:
LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(2));
which makes the main thread (which is a non-daemon thread) sleep for two seconds, giving the common pool enough time to execute your task.
Now let's change the code closer to your example:
public static void main(String[] args) {
    ForkJoinPool forkJoinPool = new ForkJoinPool();
    forkJoinPool.execute(new MyRecursiveAction(0));
    System.out.println(forkJoinPool.awaitQuiescence(5, TimeUnit.SECONDS) ? "tasks" : "time");
}
Specifically, you use forkJoinPool.awaitQuiescence(...), which is documented as:
Otherwise, waits and/or attempts to assist performing tasks...
It does not say that it will necessarily wait; it says it will "wait and/or attempt", and in this case it is more "or" than "and". As such, it will attempt to help, but it still will not wait for all the tasks to finish. Is this weird, or even stupid?
When you insert RecursiveAction.helpQuiesce(), you are eventually calling the same awaitQuiescence (with different arguments) under the hood, so essentially nothing changes; the fundamental problem is still there:
static ForkJoinPool forkJoinPool = new ForkJoinPool();
static AtomicInteger res = new AtomicInteger(0);

public static void main(String[] args) {
    forkJoinPool.execute(new MyRecursiveAction(0));
    System.out.println(forkJoinPool.awaitQuiescence(5, TimeUnit.SECONDS) ? "tasks" : "time");
    System.out.println(res.get());
}

static class MyRecursiveAction extends RecursiveAction {

    private final int num;

    public MyRecursiveAction(int num) {
        this.num = num;
    }

    @Override
    protected void compute() {
        if (num < 10_000) {
            res.incrementAndGet();
            System.out.println(num + " thread : " + Thread.currentThread().getName());
            new MyRecursiveAction(num + 1).fork();
        }
        RecursiveAction.helpQuiesce();
    }
}
When I run this, it never prints 10000, showing that inserting that line changes nothing.
The usual, default way to handle such things is to fork and then join, plus one more join in the caller, on the ForkJoinTask that you get back when calling submit. Something like:
public static void main(String[] args) {
    ForkJoinPool forkJoinPool = new ForkJoinPool(2);
    ForkJoinTask<Void> task = forkJoinPool.submit(new MyRecursiveAction(0));
    task.join();
}

static class MyRecursiveAction extends RecursiveAction {

    private final int num;

    public MyRecursiveAction(int num) {
        this.num = num;
    }

    @Override
    protected void compute() {
        if (num < 10) {
            System.out.println(num);
            MyRecursiveAction ac = new MyRecursiveAction(num + 1);
            ac.fork();
            ac.join();
        }
    }
}
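Because each task now joins the subtask it forked, the join() in the caller transitively waits for the entire chain of tasks, no matter which pool they end up running in.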
Related
How to avoid a manual sleep in a unit test.
Suppose that in the code below, process and notify take around 5 seconds to complete. So in order to let the processing complete, I have added a sleep of 5 seconds.
public class ClassToTest {

    public ProcessService processService;
    public NotificationService notificationService;

    public ClassToTest(ProcessService pService, NotificationService nService) {
        this.notificationService = nService;
        this.processService = pService;
    }

    public CompletableFuture<Void> testMethod() {
        return CompletableFuture.supplyAsync(processService::process)
                .thenAccept(notificationService::notify);
    }
}
Is there any better way to handle this?
@Test
public void completableFutureThenAccept() {
    CompletableFuture<Void> thenAccept = classToTest.testMethod();
    sleep(6);
    assertTrue(thenAccept.isDone());
    verify(mockNotificationService, times(1)).notify(Mockito.anyString());
}
Normally, you want to test whether an underlying operation completes with the intended result, has the intended side effect, or at least completes without throwing an exception. This can be achieved as easily as:
@Test
public void completableFutureThenAccept() {
    CompletableFuture<Void> future = someMethod();
    future.join();
    /* check for class under test to have the desired state */
}
join() will wait for the completion and return the result (which you can ignore in case of Void), throwing an exception if the future completed exceptionally.
If completing within a certain time is actually part of the test, simply use:
@Test(timeout = 5000)
public void completableFutureThenAccept() {
    CompletableFuture<Void> future = someMethod();
    future.join();
    /* check for class under test to have the desired state */
}
In the unlikely case that you truly want to test for completion within the specified time only, i.e. you do not care whether the operation threw an exception, you can use:
@Test(timeout = 5000)
public void completableFutureThenAccept() {
    CompletableFuture<Void> future = someMethod();
    future.exceptionally(t -> null).join();
}
This substitutes an exceptional completion with a null result, hence join() won’t throw an exception; so only the timeout remains.
Java 9 allows another alternative, not using JUnit’s timeout.
@Test
public void completableFutureThenAccept() {
    CompletableFuture<Void> future = someMethod().orTimeout(5, TimeUnit.SECONDS);
    future.join();
    /* check for class under test to have the desired state */
}
This has the advantage of not failing if the operation completes in time but the subsequent verification takes longer.
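Applied to the question’s code, the sleep can be dropped entirely. A sketch, assuming process() returns a String and both services are Mockito mocks:

@Test
public void completableFutureThenAccept() {
    when(mockProcessService.process()).thenReturn("result");
    ClassToTest classToTest = new ClassToTest(mockProcessService, mockNotificationService);

    classToTest.testMethod().join(); // waits for the whole async chain, no sleep needed

    verify(mockNotificationService, times(1)).notify("result");
}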
I wish to batch and process items as they come along, so I created a UnicastProcessor and subscribed to it like this:
UnicastProcessor<String> processor = UnicastProcessor.create();
processor
        .bufferTimeout(10, Duration.ofMillis(500))
        .subscribe(new Subscriber<List<String>>() {
            @Override
            public void onSubscribe(Subscription subscription) {
                System.out.println("OnSubscribe");
            }

            @Override
            public void onNext(List<String> strings) {
                System.out.println("OnNext");
            }

            @Override
            public void onError(Throwable throwable) {
                System.out.println("OnError");
            }

            @Override
            public void onComplete() {
                System.out.println("OnComplete");
            }
        });
And then, for testing purposes, I created a new thread and started adding items in a loop:
new Thread(() -> {
    int limit = 100;
    int i = 0;
    while (i < limit) {
        ++i;
        processor.sink().next("Hello " + i);
    }
    System.out.println("Published all");
}).start();
After running this (and letting the main thread sleep for 5 seconds) I can see that all items have been published, but the subscriber does not trigger on any of the events, so I can't process any of the published items.
What am I doing wrong here?
The Reactive Streams specification is the answer!
The total number of onNext's signalled by a Publisher to a Subscriber MUST be less than or equal to the total number of elements requested by that Subscriber's Subscription at all times. [Rule 1.1]
In your example, you simply provide a subscriber which does nothing in any sense. In turn, the Reactive Streams specification directly says that nothing will happen (there will be no onNext invocation) if the Subscription#request method has not been called:
A Subscriber MUST signal demand via Subscription.request(long n) to
receive onNext signals. [Rule 2.1]
Thus, to fix your problem, one of the possible solutions is changing the code in the following way:
UnicastProcessor<String> processor = UnicastProcessor.create();
processor
        .bufferTimeout(10, Duration.ofMillis(500))
        .subscribe(new Subscriber<List<String>>() {
            @Override
            public void onSubscribe(Subscription subscription) {
                System.out.println("OnSubscribe");
                subscription.request(Long.MAX_VALUE);
            }

            @Override
            public void onNext(List<String> strings) {
                System.out.println("OnNext");
            }

            @Override
            public void onError(Throwable throwable) {
                System.out.println("OnError");
            }

            @Override
            public void onComplete() {
                System.out.println("OnComplete");
            }
        });
Note that in this example a demand of Long.MAX_VALUE means unbounded demand, so that all messages will be pushed directly to the given Subscriber [Rule 3.17].
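Alternatively, if you do not need the full Subscriber callbacks, the lambda-based subscribe(...) overloads in Reactor request unbounded demand on your behalf; a minimal sketch:

processor
        .bufferTimeout(10, Duration.ofMillis(500))
        .subscribe(
                strings -> System.out.println("OnNext: " + strings), // onNext
                throwable -> System.out.println("OnError"),          // onError
                () -> System.out.println("OnComplete"));             // onComplete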
Use UnicastProcessor correctly
On the one hand, your example will work correctly with the mentioned fixes. However, on the other hand, each invocation of FluxProcessor#sink() (yes, sink() is a FluxProcessor method) leads to a redundant call of UnicastProcessor's onSubscribe method, which under the hood causes a few atomic reads and writes that can be avoided by creating the FluxSink once and safely reusing it as many times as needed. For example:
UnicastProcessor<String> processor = UnicastProcessor.create();
FluxSink<String> sink = processor.serialize().sink();
...
new Thread(() -> {
    int limit = 100;
    int i = 0;
    while (i < limit) {
        ++i;
        sink.next("Hello " + i);
    }
    System.out.println("Published all");
}).start();
Note that in this example I call the additional method serialize(), which provides a thread-safe sink and ensures that calling FluxSink#next concurrently will not violate the Reactive Streams spec.
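As a side note: in recent Reactor versions (3.4+), UnicastProcessor is deprecated in favor of the Sinks API; a rough equivalent of the above (worth verifying against your Reactor version) would be:

// Sinks.many().unicast() is the suggested replacement for UnicastProcessor.
Sinks.Many<String> sink = Sinks.many().unicast().onBackpressureBuffer();

sink.asFlux()
        .bufferTimeout(10, Duration.ofMillis(500))
        .subscribe(strings -> System.out.println("OnNext: " + strings));

sink.tryEmitNext("Hello"); // tryEmitNext returns an EmitResult instead of throwing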
I am using PostSharp to log performance and other statistics on some methods. I was asked to measure the performance and time taken by some subtasks, such as calls to an external web service, a large database, etc.
For example, I have a method with the AoPLoggingAttribute applied. AoPLoggingAttribute inherits from OnMethodBoundaryAspect, so it supports all the known methods (OnEntry, OnExit, OnSuccess, etc.).
[AoPLogging]
public MyClass[] MyMainMethod(string myid)
{
    //Some code here
    LongExecutingTask();
    //Rest of the code here
}
What is the best approach to measure the time taken by LongExecutingTask? I don't mind if it's part of the total execution time, but somehow I need to know the time taken by this task.
If you want to use PostSharp, you could make a timing aspect like this:
public class TimingAttribute : OnMethodBoundaryAspect
{
    Stopwatch timer = new Stopwatch();

    public override void OnEntry(MethodExecutionArgs args)
    {
        timer.Reset();
        timer.Start();
        base.OnEntry(args);
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        timer.Stop();
        Console.WriteLine("Execution took {0} milli-seconds", timer.ElapsedMilliseconds);
        base.OnExit(args);
    }
}
Now just attach the aspect to the method you want to time
[Timing]
public void LongExecutingTask() {}
Remember that PostSharp, or AOP in general, works by attaching to the method being called, not by inserting code into your main method (or wherever you are calling the methods from).
Update: If you really want to keep track of the whole call stack, you could do something like this:
public class TimingAttribute : OnMethodBoundaryAspect
{
    static List<Stopwatch> callstack = new List<Stopwatch>();
    static int callstackDepth = 0;

    public override void OnEntry(MethodExecutionArgs args)
    {
        var timer = new Stopwatch();
        timer.Start();
        callstack.Add(timer);
        ++callstackDepth;
        base.OnEntry(args);
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        --callstackDepth;
        var timer = callstack[callstackDepth];
        timer.Stop();
        if (callstackDepth == 0)
        {
            //Add code to print out all the results
            Console.WriteLine("Execution took {0} milli-seconds", timer.ElapsedMilliseconds);
            callstack.Clear();
        }
        base.OnExit(args);
    }
}
Now, this only works with a single call stack. If you had two LongExecutingTasks in your main method, you would have to think about how you want to report on those. But maybe this gives you an idea of how you could keep track of the whole call stack.
You must assign your timer to the MethodExecutionArgs in order to get accurate results in a multi-threaded environment. PostSharp shares aspect instances internally, so any instance members risk being overwritten by concurrent invocations.
public class TimingAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        args.MethodExecutionTag = Stopwatch.StartNew();
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        var sw = (Stopwatch)args.MethodExecutionTag;
        sw.Stop();
        System.Diagnostics.Debug.WriteLine("{0} executed in {1} seconds",
            args.Method.Name, sw.Elapsed.TotalSeconds);
    }
}
From the viewpoint of running code on the UI thread, is there any difference between:
MainActivity.this.runOnUiThread(new Runnable() {
    public void run() {
        Log.d("UI thread", "I am the UI thread");
    }
});
or
MainActivity.this.myView.post(new Runnable() {
    public void run() {
        Log.d("UI thread", "I am the UI thread");
    }
});
and
private class BackgroundTask extends AsyncTask<String, Void, Bitmap> {
    protected void onPostExecute(Bitmap result) {
        Log.d("UI thread", "I am the UI thread");
    }
}
None of those are precisely the same, though they will all have the same net effect.
The difference between the first and the second is that if you happen to be on the main application thread when executing the code, the first one (runOnUiThread()) will execute the Runnable immediately. The second one (post()) always puts the Runnable at the end of the event queue, even if you are already on the main application thread.
The third one, assuming you create and execute an instance of BackgroundTask, will waste a lot of time grabbing a thread out of the thread pool, to execute a default no-op doInBackground(), before eventually doing what amounts to a post(). This is by far the least efficient of the three. Use AsyncTask if you actually have work to do in a background thread, not just for the use of onPostExecute().
I like the one from HPP's comment; it can be used anywhere without any parameters:
new Handler(Looper.getMainLooper()).post(new Runnable() {
    @Override
    public void run() {
        Log.d("UI thread", "I am the UI thread");
    }
});
There is a fourth way, using Handler:
new Handler().post(new Runnable() {
    @Override
    public void run() {
        // Code here will run in UI thread
    }
});
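Note that the no-argument Handler() constructor binds to the Looper of the thread that creates it, so this posts to the UI thread only when it is constructed on the UI thread; from a background thread, use new Handler(Looper.getMainLooper()) as shown above. (The no-argument constructor is also deprecated as of API 30.)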
The answer by Pomber is acceptable; however, I'm not a big fan of creating new objects repeatedly. The best solutions are always the ones that try to mitigate memory hogging. Yes, there is automatic garbage collection, but memory conservation on a mobile device is part of best practice.
The code below updates a TextView in a service.
TextViewUpdater textViewUpdater = new TextViewUpdater();
Handler textViewUpdaterHandler = new Handler(Looper.getMainLooper());

private class TextViewUpdater implements Runnable {
    private String txt;

    @Override
    public void run() {
        searchResultTextView.setText(txt);
    }

    public void setText(String txt) {
        this.txt = txt;
    }
}
It can be used from anywhere like this:
textViewUpdater.setText("Hello");
textViewUpdaterHandler.post(textViewUpdater);
As of Android P you can use getMainExecutor():
getMainExecutor().execute(new Runnable() {
    @Override
    public void run() {
        // Code will run on the main thread
    }
});
From the Android developer docs:
Return an Executor that will run enqueued tasks on the main thread associated with this context. This is the thread used to dispatch calls to application components (activities, services, etc).
From the CommonsBlog:
You can call getMainExecutor() on Context to get an Executor that will execute its jobs on the main application thread. There are other ways of accomplishing this, using Looper and a custom Executor implementation, but this is simpler.
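Before Android P, AndroidX provides a similar executor through ContextCompat (a sketch; check that your androidx.core version includes it):

// ContextCompat.getMainExecutor(...) backports the main-thread executor.
ContextCompat.getMainExecutor(context).execute(() -> {
    // Code will run on the main thread
});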
If you need to use this in a Fragment, you should use:
private Context context;

@Override
public void onAttach(Context context) {
    super.onAttach(context);
    this.context = context;
}

((MainActivity) context).runOnUiThread(new Runnable() {
    public void run() {
        Log.d("UI thread", "I am the UI thread");
    }
});
instead of
getActivity().runOnUiThread(new Runnable() {
    public void run() {
        Log.d("UI thread", "I am the UI thread");
    }
});
because there will be a NullPointerException in some situations, such as with a pager fragment.
Use a Handler:
new Handler(Looper.getMainLooper()).post(new Runnable() {
    @Override
    public void run() {
        // Code here will run in UI thread
    }
});
Kotlin version:
Handler(Looper.getMainLooper()).post {
    Toast.makeText(context, "Running on UI(Main) thread.", Toast.LENGTH_LONG).show()
}
Or, if you are using Kotlin coroutines, add this inside a coroutine scope:
withContext(Dispatchers.Main) {
    Toast.makeText(context, "Running on UI(Main) thread.", Toast.LENGTH_LONG).show()
}
I've read several questions similar to this, but none of the answers provide ideas of how to clean up memory while still maintaining lock integrity. I'm estimating the number of key-value pairs at a given time to be in the tens of thousands, but the number of key-value pairs over the lifespan of the data structure is virtually infinite (realistically it probably wouldn't be more than a billion, but I'm coding to the worst case).
I have an interface:
public interface KeyLock<K extends Comparable<? super K>> {
    public void lock(K key);
    public void unlock(K key);
}
with a default implementation:
public class DefaultKeyLock<K extends Comparable<? super K>> implements KeyLock<K> {

    private final ConcurrentMap<K, Mutex> lockMap;

    public DefaultKeyLock() {
        lockMap = new ConcurrentSkipListMap<K, Mutex>();
    }

    @Override
    public void lock(K key) {
        Mutex mutex = new Mutex();
        Mutex existingMutex = lockMap.putIfAbsent(key, mutex);
        if (existingMutex != null) {
            mutex = existingMutex;
        }
        mutex.lock();
    }

    @Override
    public void unlock(K key) {
        Mutex mutex = lockMap.get(key);
        mutex.unlock();
    }
}
This works nicely, but the map never gets cleaned up. What I have so far for a clean implementation is:
public class CleanKeyLock<K extends Comparable<? super K>> implements KeyLock<K> {

    private final ConcurrentMap<K, LockWrapper> lockMap;

    public CleanKeyLock() {
        lockMap = new ConcurrentSkipListMap<K, LockWrapper>();
    }

    @Override
    public void lock(K key) {
        LockWrapper wrapper = new LockWrapper(key);
        wrapper.addReference();
        LockWrapper existingWrapper = lockMap.putIfAbsent(key, wrapper);
        if (existingWrapper != null) {
            wrapper = existingWrapper;
            wrapper.addReference();
        }
        wrapper.lock();
    }

    @Override
    public void unlock(K key) {
        LockWrapper wrapper = lockMap.get(key);
        if (wrapper != null) {
            wrapper.unlock();
            wrapper.removeReference();
        }
    }

    private class LockWrapper {

        private final K key;
        private final ReentrantLock lock;
        private int referenceCount;

        public LockWrapper(K key) {
            this.key = key;
            lock = new ReentrantLock();
            referenceCount = 0;
        }

        public synchronized void addReference() {
            lockMap.put(key, this);
            referenceCount++;
        }

        public synchronized void removeReference() {
            referenceCount--;
            if (referenceCount == 0) {
                lockMap.remove(key);
            }
        }

        public void lock() {
            lock.lock();
        }

        public void unlock() {
            lock.unlock();
        }
    }
}
This works for two threads accessing a single key lock, but once a third thread is introduced the lock integrity is no longer guaranteed. Any ideas?
I don't buy that this works for two threads. Consider this:
(Thread A) calls lock(x), now holds lock x
thread switch
(Thread B) calls lock(x), putIfAbsent() returns the current wrapper for x
thread switch
(Thread A) calls unlock(x), the wrapper reference count hits 0 and it gets removed from the map
(Thread A) calls lock(x), putIfAbsent() inserts a new wrapper for x
(Thread A) locks on the new wrapper
thread switch
(Thread B) locks on the old wrapper
How about:
LockWrapper starts with a reference count of 1
addReference() returns false if the reference count is 0
in lock(), if existingWrapper != null, we call addReference() on it. If this returns false, it has already been removed from the map, so we loop back and try again from the putIfAbsent()
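A minimal sketch of that retry loop, assuming LockWrapper now starts with a reference count of 1 and addReference() returns false once the count has dropped to 0:

@Override
public void lock(K key) {
    for (;;) {
        LockWrapper wrapper = new LockWrapper(key);       // reference count starts at 1
        LockWrapper existing = lockMap.putIfAbsent(key, wrapper);
        if (existing == null) {
            wrapper.lock();
            return;
        }
        if (existing.addReference()) {                    // false if already at 0
            existing.lock();
            return;
        }
        // The existing wrapper was concurrently removed from the map; retry.
    }
}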
I would use a fixed array by default for a striped lock, since you can size it to the concurrency level that you expect. While there may be hash collisions, a good spreader will resolve that. If the locks are used for short critical sections, then you may be creating contention in the ConcurrentHashMap that defeats the optimization.
You're welcome to adapt my implementation, though I only implemented the dynamic version for fun. It didn't seem useful in practice, so only the fixed version was used in production.
ReentrantStripedLock in http://code.google.com/p/concurrentlinkedhashmap/wiki/IndexableCache