I want to know the advantages of using a BlockingQueue instead of a pipe (PipedOutputStream and PipedInputStream).
import java.io.*;
import java.util.concurrent.*;

public class PipedStreamVsBlocking {

    public static void main(String... args) {
        BlockingQueue<Integer> blockingQueue = new LinkedBlockingDeque<>(2);
        ExecutorService executor = Executors.newFixedThreadPool(4);

        Runnable producerTask = () -> {
            try {
                while (true) {
                    int value = ThreadLocalRandom.current().nextInt(0, 1000);
                    blockingQueue.put(value);
                    System.out.println("BlockingQueue.Produced " + value);
                    int timeSleeping = ThreadLocalRandom.current().nextInt(500, 1000);
                    Thread.sleep(timeSleeping);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };

        Runnable consumerTask = () -> {
            try {
                while (true) {
                    int value = blockingQueue.take();
                    System.out.println("BlockingQueue.Consume " + value);
                    int timeSleeping = ThreadLocalRandom.current().nextInt(500, 1000);
                    Thread.sleep(timeSleeping);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };

        PipedOutputStream pipedSrc = new PipedOutputStream();
        PipedInputStream pipedSnk = new PipedInputStream();
        try {
            pipedSnk.connect(pipedSrc);
        } catch (IOException e) {
            e.printStackTrace();
        }

        Runnable runnablePut2 = () -> {
            try {
                ObjectOutputStream oos = new ObjectOutputStream(pipedSrc);
                while (true) {
                    int value = ThreadLocalRandom.current().nextInt(0, 1000);
                    oos.writeInt(value);
                    oos.flush();
                    System.out.println("PipedStream.Produced " + value);
                    int timeSleeping = ThreadLocalRandom.current().nextInt(500, 1000);
                    Thread.sleep(timeSleeping);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        };

        Runnable runnableGet2 = () -> {
            try {
                ObjectInputStream ois = new ObjectInputStream(pipedSnk);
                while (true) {
                    int value = ois.readInt();
                    System.out.println("PipedStream.Consume " + value);
                    int timeSleeping = ThreadLocalRandom.current().nextInt(500, 1000);
                    Thread.sleep(timeSleeping);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        };

        executor.execute(producerTask);
        executor.execute(consumerTask);
        executor.execute(runnablePut2);
        executor.execute(runnableGet2);
        executor.shutdown();
    }
}
The output for this code is:
BlockingQueue.Consume 298
BlockingQueue.Produced 298
PipedStream.Produced 510
PipedStream.Consume 510
BlockingQueue.Produced 536
BlockingQueue.Consume 536
PipedStream.Produced 751
PipedStream.Consume 751
PipedStream.Produced 619
BlockingQueue.Produced 584
BlockingQueue.Consume 584
PipedStream.Consume 619
BlockingQueue.Produced 327
PipedStream.Produced 72
BlockingQueue.Consume 327
PipedStream.Consume 72
BlockingQueue.Produced 823
BlockingQueue.Consume 823
PipedStream.Produced 544
PipedStream.Consume 544
BlockingQueue.Produced 352
BlockingQueue.Consume 352
PipedStream.Produced 134
PipedStream.Consume 134
I think that using pipes (PipedOutputStream and PipedInputStream) has an advantage: I know immediately when the data is produced and processed.
Maybe I am wrong, and the recommendation is to use BlockingQueue instead of a pipe.
But the comments/recommendations I have received are not found in the documentation.
For this reason, I need to know what I missed.
Why should I use BlockingQueue instead of piped streams?
Like any Java Collection, a BlockingQueue stores references to objects, so the thread(s) retrieving objects from it receive precisely the same runtime objects that the producing thread(s) put into it.
In contrast, Serialization stores a persistent form into the byte stream, which only works for Serializable objects and will lead to the creation of copies at the receiving end. In some cases, the objects may get replaced by canonical objects afterwards, still, the entire procedure is significantly more expensive than just transferring references.
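To make the copying concrete, here is a minimal sketch of mine (not from the original post) that round-trips a list through in-memory serialization; the deserialized object is equal to, but not the same instance as, the original:

import java.io.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SerializationCopyDemo {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        List<Integer> original = new ArrayList<>(Arrays.asList(1, 2, 3));

        // Write the object's persistent form into an in-memory buffer.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(original);
        }

        // Reading it back creates a new, equal-but-distinct object graph.
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            Object copy = ois.readObject();
            System.out.println(copy.equals(original)); // true
            System.out.println(copy == original);      // false: a copy was created
        }
    }
}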
In your example case, where you transfer int values, the object identity doesn’t matter, but the overhead of boxing, serializing, deserializing, and unboxing Integer instances is even more questionable.
If you didn’t use Serialization, but transferred int values as four-byte quantities directly, using PipedOutputStream and PipedInputStream would have a point, as they are a good tool for transferring large quantities of primitive data. They also have intrinsic support for marking the end of data by closing the pipe.
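A minimal sketch of that primitive-transfer variant (my addition, using DataOutputStream/DataInputStream on top of the pipe, so each int travels as four raw bytes with no serialization header):

import java.io.*;

public class PrimitivePipeDemo {
    public static void main(String[] args) throws IOException {
        PipedOutputStream src = new PipedOutputStream();
        PipedInputStream snk = new PipedInputStream(src);

        Thread producer = new Thread(() -> {
            // Closing the stream closes the pipe, signaling end of data.
            try (DataOutputStream out = new DataOutputStream(src)) {
                for (int i = 0; i < 5; i++) {
                    out.writeInt(i); // four raw bytes per value
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        producer.start();

        try (DataInputStream in = new DataInputStream(snk)) {
            while (true) {
                System.out.println(in.readInt());
            }
        } catch (EOFException expected) {
            // Pipe closed by the producer: end of data.
        }
    }
}

DataOutputStream writes each int big-endian in exactly four bytes, which is why this stays cheap compared to ObjectOutputStream.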
These pipes would also be the right tool for software that ought to be agnostic regarding processes or even the computers running the producer or consumer, i.e. when you want to be able to use the same software when the pipe is actually between processes or even a network connection. That would also justify using Serialization (as JMX connections do).
But unless you’re truly transferring single bytes that retain their meaning when being torn apart, there’s the intrinsic limitation that only one producer can write into a pipe and only one consumer can read the data.
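A BlockingQueue has no such restriction: any number of producers and consumers may share one queue. A minimal sketch of mine with two producers and a single consumer:

import java.util.concurrent.*;

public class MultiProducerDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10);
        ExecutorService executor = Executors.newCachedThreadPool();

        // Two producers feed the same queue; a pipe would allow only one writer.
        for (int p = 0; p < 2; p++) {
            final int producerId = p;
            executor.execute(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        queue.put(producerId * 100 + i);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // A single consumer drains everything both producers sent.
        for (int i = 0; i < 10; i++) {
            System.out.println("Consumed " + queue.take());
        }
        executor.shutdown();
    }
}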
Related
I am new to multi-threaded programming. I am told that thread-unsafety is always caused by something shared across threads. That makes sense to me; however, it does not seem to explain the issue in the code below, where apparently nothing is shared across the threads.
package test;

public class Outputer {
    public void output() {
        String name = "123456789";
        int len = name.length();
        for (int i = 0; i < len; i++) {
            System.out.print(name.charAt(i));
        }
        System.out.println();
    }
}
package test;

public class TraditionalThreadSynchronized {

    public static void main(String[] args) {
        Outputer outputer = new Outputer();

        new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i <= 50; i++) {
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    outputer.output();
                }
            }
        }).start();

        new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i <= 50; i++) {
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    outputer.output();
                }
            }
        }).start();
    }
}
What I expected is that 123456789 would always appear intact. But sometimes I see the output in the console as below.
... ...
123456789
123456789
123456789
123456789
123456789
123456789 // expected
112323456789 // unexpected
456789 // unexpected
123456789
123456789
123456789
123456789
123456789
... ...
I understand the root cause: while one thread is executing the code snippet below, its CPU time slice ends before it finishes. Another thread then gets a time slice and starts executing the same snippet, possibly also without finishing. Then the first thread gets a time slice again and continues from where it stopped.
In a word, I am aware that the root cause is that the code snippet below is not an atomic operation.
for (int i = 0; i < len; i++) {
    System.out.print(name.charAt(i));
}
System.out.println();
My fix is to surround it with a synchronized block as below. Now it meets my expectation. Looks good.
synchronized (this) {
    for (int i = 0; i < len; i++) {
        System.out.print(name.charAt(i));
    }
    System.out.println();
}
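For reference, the whole class with the fix applied would look like this (my reconstruction, putting the snippet above back into context):

package test;

public class Outputer {
    public void output() {
        String name = "123456789";
        int len = name.length();
        // Only one thread at a time may hold this Outputer's monitor, so the
        // characters of one line can no longer be interleaved with another's.
        synchronized (this) {
            for (int i = 0; i < len; i++) {
                System.out.print(name.charAt(i));
            }
            System.out.println();
        }
    }
}

Note that this works only because both threads call output() on the same Outputer instance; with one instance per thread, each would lock its own monitor and the interleaving would come back.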
However, I still have some doubts that keep haunting my mind. Somebody help!
Is the statement below true? Always?
Thread-unsafety is ALWAYS caused by something shared across multiple threads.
I am asking because I don't see any data shared across threads in my example. The variable name is a local variable, not a parameter passed in or a value returned, so name itself is thread-safe.
If the statement is true, what is shared by the threads?
If the statement is false, what other situations can cause thread-unsafety without data being shared across threads?
UPDATE: The initial question has been answered as to why the crashes happen, but the lingering problem remains: why is the OnImageAvailable callback called so many times? When it is called, I want to do stuff with the image, but whatever method I run at that time is called many times. Is this the wrong place to be using the resulting image?
I am using the sample code found here for a Xamarin Android implementation of the Android Camera2 API. My issue is that when the capture button is pressed a single time, the OnImageAvailable callback of my image-available listener gets called multiple times.
This is causing a problem because the image from AcquireNextImage needs to be closed before another can be used, but Close is not called until the Run method of the ImageSaver class, as seen below.
This causes these 2 errors:
Unable to acquire a buffer item, very likely client tried to acquire
more than maxImages buffers
AND
maxImages (2) has already been acquired, call #close before acquiring
more.
maxImages is set to 2 by default, but setting it to 1 does not help. How do I prevent the callback from being called twice?
public void OnImageAvailable(ImageReader reader)
{
    var image = reader.AcquireNextImage();
    owner.mBackgroundHandler.Post(new ImageSaver(image, file));
}
// Saves a JPEG {@link Image} into the specified {@link File}.
private class ImageSaver : Java.Lang.Object, IRunnable
{
    // The JPEG image
    private Image mImage;
    // The file we save the image into.
    private File mFile;

    public ImageSaver(Image image, File file)
    {
        if (image == null)
            throw new System.ArgumentNullException("image");
        if (file == null)
            throw new System.ArgumentNullException("file");
        mImage = image;
        mFile = file;
    }

    public void Run()
    {
        ByteBuffer buffer = mImage.GetPlanes()[0].Buffer;
        byte[] bytes = new byte[buffer.Remaining()];
        buffer.Get(bytes);
        using (var output = new FileOutputStream(mFile))
        {
            try
            {
                output.Write(bytes);
            }
            catch (IOException e)
            {
                e.PrintStackTrace();
            }
            finally
            {
                mImage.Close();
            }
        }
    }
}
The method OnImageAvailable can be called again as soon as you leave it if there is another picture in the pipeline.
I would recommend calling Close in the same method where you call AcquireNextImage. So, if you choose to get the image directly from that callback, then you have to call Close there as well.
One solution involves grabbing the image in that method and closing it right away:
public void OnImageAvailable(ImageReader reader)
{
    var image = reader.AcquireNextImage();
    try
    {
        ByteBuffer buffer = image.GetPlanes()[0].Buffer;
        byte[] bytes = new byte[buffer.Remaining()];
        buffer.Get(bytes);
        // I am not sure where you get the file instance but it is not important.
        owner.mBackgroundHandler.Post(new ImageSaver(bytes, file));
    }
    finally
    {
        image.Close();
    }
}
The ImageSaver would be modified to accept the byte array as the first parameter of the constructor:
public ImageSaver(byte[] bytes, File file)
{
    if (bytes == null)
        throw new System.ArgumentNullException("bytes");
    if (file == null)
        throw new System.ArgumentNullException("file");
    mBytes = bytes;
    mFile = file;
}
The major downside of this solution is the risk of putting a lot of pressure on the memory as you basically save the images in memory until they are processed, one after another.
Another solution consists in acquiring the image on the background thread instead.
public void OnImageAvailable(ImageReader reader)
{
    // Again, I am not sure where you get the file instance but it is not important.
    owner.mBackgroundHandler.Post(new ImageSaver(reader, file));
}
This solution is less memory-intensive, but you might have to increase the maximum number of images from 2 to something higher, depending on your needs. Again, the ImageSaver's constructor needs to be modified to accept an ImageReader as a parameter:
public ImageSaver(ImageReader imageReader, File file)
{
    if (imageReader == null)
        throw new System.ArgumentNullException("imageReader");
    if (file == null)
        throw new System.ArgumentNullException("file");
    mImageReader = imageReader;
    mFile = file;
}
Now the Run method would have the responsibility of acquiring and releasing the Image:
public void Run()
{
    Image image = mImageReader.AcquireNextImage();
    try
    {
        ByteBuffer buffer = image.GetPlanes()[0].Buffer;
        byte[] bytes = new byte[buffer.Remaining()];
        buffer.Get(bytes);
        using (var output = new FileOutputStream(mFile))
        {
            try
            {
                output.Write(bytes);
            }
            catch (IOException e)
            {
                e.PrintStackTrace();
            }
        }
    }
    finally
    {
        image?.Close();
    }
}
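As a side note on the maxImages remark above: the limit is fixed when the ImageReader is created, so raising it means passing a larger value as the fourth argument. A minimal native-Java sketch (the dimensions, format, and names here are placeholders of mine; Xamarin exposes the same call as ImageReader.NewInstance):

import android.graphics.ImageFormat;
import android.media.ImageReader;
import android.os.Handler;

class ReaderSetup {
    // maxImages = 4 gives the background thread more headroom before
    // AcquireNextImage starts failing with "maxImages has already been acquired".
    static ImageReader createReader(ImageReader.OnImageAvailableListener listener,
                                    Handler backgroundHandler) {
        ImageReader reader = ImageReader.newInstance(1920, 1080, ImageFormat.JPEG, 4);
        reader.setOnImageAvailableListener(listener, backgroundHandler);
        return reader;
    }
}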
I too faced this issue for a long time and tried implementing @kzrytof's solution, but it didn't help as well as expected. However, I found a way to get onImageAvailable to execute only once.
Scenario: when an image is available, the onImageAvailable method is called, right?
So, what I did is: after closing the image with image.close(), I called imageReader.setOnImageAvailableListener() and set the listener to null. This way I stopped the execution from happening a second time.
I know your question is about Xamarin and my code below is native Android Java, but the methods and functionality are the same, so give it a try:
@Override
public void onImageAvailable(ImageReader reader) {
    final Image image = reader.acquireLatestImage();
    try {
        if (image != null) {
            Image.Plane[] planes = image.getPlanes();
            ByteBuffer buffer = planes[0].getBuffer();
            int pixelStride = planes[0].getPixelStride();
            int rowStride = planes[0].getRowStride();
            int rowPadding = rowStride - pixelStride * width;
            int bitmapWidth = width + rowPadding / pixelStride;
            if (latestBitmap == null ||
                    latestBitmap.getWidth() != bitmapWidth ||
                    latestBitmap.getHeight() != height) {
                if (latestBitmap != null) {
                    latestBitmap.recycle();
                }
                // Recreate the bitmap at the right size before copying into it
                // (without this, copyPixelsFromBuffer below would hit a null bitmap).
                latestBitmap = Bitmap.createBitmap(bitmapWidth, height,
                        Bitmap.Config.ARGB_8888);
            }
            latestBitmap.copyPixelsFromBuffer(buffer);
        }
    } catch (Exception e) {
    } finally {
        if (image != null) {
            image.close();
        }
        // Detach the listener so this callback does not run a second time.
        reader.setOnImageAvailableListener(null, svc.getHandler());
    }
    // next steps to save the image
}
Version: rocketmq-all-4.1.0-incubating
We send messages at 1000 QPS using sync send, but the broker throws this exception:
[TIMEOUT_CLEAN_QUEUE] broker busy, start flow control for a while
Here is the related code:
while (true) {
    try {
        if (!this.brokerController.getSendThreadPoolQueue().isEmpty()) {
            final Runnable runnable = this.brokerController.getSendThreadPoolQueue().peek();
            if (null == runnable) {
                break;
            }
            final RequestTask rt = castRunnable(runnable);
            if (rt == null || rt.isStopRun()) {
                break;
            }
            final long behind = System.currentTimeMillis() - rt.getCreateTimestamp();
            if (behind >= this.brokerController.getBrokerConfig().getWaitTimeMillsInSendQueue()) {
                if (this.brokerController.getSendThreadPoolQueue().remove(runnable)) {
                    rt.setStopRun(true);
                    rt.returnResponse(RemotingSysResponseCode.SYSTEM_BUSY, String.format("[TIMEOUT_CLEAN_QUEUE]broker busy, start flow control for a while, period in queue: %sms, size of queue: %d", behind, this.brokerController.getSendThreadPoolQueue().size()));
                }
            } else {
                break;
            }
        } else {
            break;
        }
    } catch (Throwable ignored) {
    }
}
I found that the broker's default value of sendMessageThreadPoolNums is 1:
/**
 * thread numbers for send message thread pool, since spin lock will be used by default since 4.0.x, the default value is 1.
 */
private int sendMessageThreadPoolNums = 1; // 16 + Runtime.getRuntime().availableProcessors() * 4;
private int pullMessageThreadPoolNums = 16 + Runtime.getRuntime().availableProcessors() * 2;
but in previous versions the default isn't 1. If I configure sendMessageThreadPoolNums = 100, will that resolve this issue? And what difference does it make compared to the default value?
Thanks.
SHORT ANSWER:
You have two choices:
Set sendMessageThreadPoolNums to a small number, say 1, which is the default value since version 4.1.x, and keep the default value of useReentrantLockWhenPutMessage=false, which was introduced in 4.1.x:
sendMessageThreadPoolNums=1
useReentrantLockWhenPutMessage=false
If you need a large number of threads to process message sending, you'd better use useReentrantLockWhenPutMessage=true:
sendMessageThreadPoolNums=128 // large thread number
useReentrantLockWhenPutMessage=true // do NOT use the spin lock; use a ReentrantLock when putting messages
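Why this pairing matters: a spin lock burns CPU while waiting, which is cheap when only one or two threads ever compete but wasteful with many waiters, whereas a ReentrantLock parks waiting threads instead. A toy sketch of the two strategies (my illustration, not RocketMQ's actual locking code):

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class LockStyles {
    private final AtomicBoolean spinLock = new AtomicBoolean(false);
    private final ReentrantLock reentrantLock = new ReentrantLock();

    // Fine with very few competing threads: the wait is short, so
    // busy-waiting is cheaper than suspending and waking a thread.
    void withSpinLock(Runnable critical) {
        while (!spinLock.compareAndSet(false, true)) {
            // busy-wait, burning CPU
        }
        try {
            critical.run();
        } finally {
            spinLock.set(false);
        }
    }

    // Better with many competing threads: waiters are parked by the
    // scheduler instead of spinning.
    void withReentrantLock(Runnable critical) {
        reentrantLock.lock();
        try {
            critical.run();
        } finally {
            reentrantLock.unlock();
        }
    }
}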
I'm new to RxJava. Currently I'm trying out samples and converting existing code to Rx.
I have an existing API which takes a large list of objects. Since the server takes time to process a large number of inputs, and also due to timeout issues, I'm sending inputs to the API batch-wise: if I have 300 objects, I pass them as batches of 10. First I call the API with the first 10 items, wait for the response, and once I receive it I take the next 10, until I reach 300 items. Right now I am using many nested callbacks and flags to keep track of items and results, and I need to convert this to RxJava.
I tried something with the buffer operator and it's working as expected. I just wanted to know whether there is a better solution or this is the right way to do it. My code is given below.
Observable.range(1, 300)
        .buffer(10)
        .flatMap((integers) -> mockServerResult(integers))
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(new DisposableObserver<String>() {
            @Override
            public void onNext(@NonNull String s) {
                Log.d(TAG, "onNext: " + s);
            }

            @Override
            public void onError(@NonNull Throwable e) {
            }

            @Override
            public void onComplete() {
            }
        });
}
public Observable<String> mockServerResult(List<Integer> integers) {
    StringBuilder stringBuilder = new StringBuilder("server Results for ");
    for (Integer integer : integers) {
        stringBuilder.append(integer.toString()).append(",");
    }
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return Observable.just(stringBuilder.toString());
}
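One hedged note, not from the original post: flatMap does not guarantee that batches are processed one after another once the inner observables run on different threads. If the strict wait-for-response-then-send-next behavior described above is required, concatMap is a drop-in replacement that subscribes to each inner observable only after the previous one completes:

Observable.range(1, 300)
        .buffer(10)
        .concatMap(integers -> mockServerResult(integers)) // preserves batch order
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(s -> Log.d(TAG, "onNext: " + s));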
I'm trying to implement a chunked response in a webapp using Play 2 with Akka. However, instead of loading the response chunk by chunk, the whole response arrives at once. Below is the code with which I'm creating the chunks in the controller:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

import com.google.inject.Inject;
import com.google.inject.Singleton;

import org.pmw.tinylog.Logger;

import play.cache.CacheApi;
import play.cache.Cached;
import play.filters.csrf.AddCSRFToken;
import play.filters.csrf.CSRF;
import play.libs.Json;
import play.libs.concurrent.HttpExecutionContext;
import play.mvc.Controller;
import play.mvc.Http;
import play.mvc.Http.Cookie;
import play.mvc.Result;

import akka.NotUsed;
import akka.actor.Status;
import akka.stream.OverflowStrategy;
import akka.stream.javadsl.Source;
import akka.util.ByteString;

/**
 * @author Abhinabyte
 */
@Singleton
@AddCSRFToken
public class GetHandler extends Controller {

    @Inject
    private CacheApi cache;

    @Inject
    private HttpExecutionContext httpExecutionContext;

    public CompletionStage<Result> index() {
        return CompletableFuture.supplyAsync(() ->
            Source.<ByteString>actorRef(256, OverflowStrategy.dropNew())
                .mapMaterializedValue(sourceActor -> {
                    CompletableFuture.runAsync(() -> {
                        sourceActor.tell(ByteString.fromString("1"), null);
                        sourceActor.tell(ByteString.fromString("2"), null);
                        sourceActor.tell(ByteString.fromString("3"), null);
                        try {
                            Thread.sleep(3000); // intentional delay
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                        sourceActor.tell(ByteString.fromString("444444444444444444444444444444444444444444444444444444444444444444444444"), null);
                        sourceActor.tell(new Status.Success(NotUsed.getInstance()), null);
                    });
                    return sourceActor;
                })
        ).thenApplyAsync(chunks -> ok().chunked(chunks).as("text/html"));
    }
}
And below is the Akka thread pool configuration in application.conf:
akka {
    jvm-exit-on-fatal-error = on
    actor {
        default-dispatcher {
            fork-join-executor {
                parallelism-factor = 1.0
                parallelism-max = 64
                task-peeking-mode = LIFO
            }
        }
    }
}

play.server.netty {
    eventLoopThreads = 0
    maxInitialLineLength = 4096
    log.wire = false
    transport = "native"
}
As you can see, before sending the second-to-last chunk I intentionally delay the response. So, logically, all the chunked data before it should be delivered first.
However, in my case the whole bunch of data arrives at once. I've tested in every browser (and have even tried cURL).
What am I missing here?
Blocking in mapMaterializedValue will do that because it runs in the Akka default-dispatcher thread, thus preventing message routing for the duration (see this answer for details). You want to dispatch your slow, blocking code asynchronously, with the actor reference for it to post messages to. Your example will do what you expect if you run it in a future:
public CompletionStage<Result> test() {
    return CompletableFuture.supplyAsync(() ->
        Source.<ByteString>actorRef(256, OverflowStrategy.dropNew())
            .mapMaterializedValue(sourceActor -> {
                CompletableFuture.runAsync(() -> {
                    for (int i = 0; i < 20; i++) {
                        sourceActor.tell(ByteString.fromString(String.valueOf(i) + "<br/>\n"), null);
                        try {
                            Thread.sleep(500); // intentional delay
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                    sourceActor.tell(new Status.Success(NotUsed.getInstance()), null);
                });
                return sourceActor;
            })
    ).thenApplyAsync(chunks -> ok().chunked(chunks).as("text/html"));
}
If you check the Source code, you can see that the first parameter is bufferSize:
public static <T> Source<T, ActorRef> actorRef(int bufferSize,
                                               OverflowStrategy overflowStrategy)
All the elements that you generate in the stream probably fit into that buffer, hence only one HTTP chunk is generated. Try adding more elements, as in @Mikesname's example.
This might be useful if you need a chunked response using another approach:
public Result test() {
    try {
        // Finite list
        List<String> sourceList = Arrays.asList("kiki", "foo", "bar");
        Source<String, ?> source = Source.from(sourceList);

        /* Alternatively, a DB call which fetches a record at a time and sends it as a chunked response:
        final Iterator<String> sourceIterator = Person.fetchAll();
        Source<String, ?> source = Source.from(() -> sourceIterator); */

        return ok().chunked(source.via(Flow.of(String.class).map(ByteString::fromString))).as(Http.MimeTypes.TEXT);
    } catch (Exception e) {
        return badRequest(e.getMessage());
    }
}