How do I connect a PublishSubject to a PublishProcessor to simulate deliveries?
I have this problem:
I have a list of Orders.
I receive the orders, filter them by temp "hot", cook each order,
and place it on the hot shelf; after that, a courier delivers an order every 3 seconds.
If the shelf is full, discard orders already on the shelf.
Hot shelf -> Size: 10
I tried using a PublishSubject that sends to a PublishProcessor acting as the hot shelf,
but that does not work.
Any idea how to achieve this?
I do not have much experience with reactive programming.
import io.reactivex.rxjava3.core.BackpressureOverflowStrategy;
import io.reactivex.rxjava3.processors.PublishProcessor;
import io.reactivex.rxjava3.schedulers.Schedulers;
import io.reactivex.rxjava3.subjects.PublishSubject;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class App {
    public static void main(String[] args) {
        ReadJson read = new ReadJson();
        List<Order> list = read.cargarArchivoJson("orders.json");
        PublishSubject<Order> subjectOrder = PublishSubject.create();
        PublishProcessor<Order> hotShelf = PublishProcessor.create();
        AtomicInteger hotSize = new AtomicInteger(0);

        subjectOrder.observeOn(Schedulers.computation())
            .filter(r -> r.getTemp().equals("hot"))
            .subscribe(
                s -> {
                    hotSize.incrementAndGet();
                    System.out.println("Sent to hot Shelf, Size: " + hotSize.get());
                    hotShelf.onNext(s);
                },
                Throwable::printStackTrace,
                () -> System.out.println("--- Processed all Hot Temperature Orders")
            );

        hotShelf.onBackpressureBuffer(10,
                () -> System.out.println("dropping the oldest .. "),
                BackpressureOverflowStrategy.DROP_OLDEST)
            .observeOn(Schedulers.computation())
            .subscribe(s -> {
                System.out.println("Waiting for the courier..");
                TimeUnit.SECONDS.sleep(3);
                hotSize.decrementAndGet();
                System.out.println("Hot Order delivered, Size on hot Shelf: " + hotSize.get());
            }, Throwable::printStackTrace);

        for (int i = 0; i < list.size(); i++) {
            subjectOrder.onNext(list.get(i));
        }
    }
}
The output is:
Sent to hot Shelf, Size: 1
Sent to hot Shelf, Size: 2
Sent to hot Shelf, Size: 3
Sent to hot Shelf, Size: 4
Sent to hot Shelf, Size: 5
Sent to hot Shelf, Size: 6
Waiting for the courier..
Sent to hot Shelf, Size: 7
Sent to hot Shelf, Size: 8
Sent to hot Shelf, Size: 9
..
The program finished when the loop iterating the list ended, and it didn't deliver a single order.
Update:
Now it is working, but it does not discard after 10 orders; it only discards after a much larger amount. What would be the best way to discard an Order? Using an AtomicInteger is not the best way to do this, also because I want to know the id of the discarded order.
Is there a way to end the program once hotShelf completes, to avoid a while loop or a Thread.sleep()?
public static void main(String[] args) {
    ReadJson read = new ReadJson();
    List<Order> list = read.cargarArchivoJson("orders.json");
    Subject<Order> subjectOrder = PublishSubject.<Order>create().toSerialized();
    PublishProcessor<Order> hotShelf = PublishProcessor.create();
    AtomicInteger hotSize = new AtomicInteger(0);

    subjectOrder.observeOn(Schedulers.computation())
        .filter(r -> r.getTemp().equals("hot"))
        .subscribe(
            s -> {
                hotSize.incrementAndGet();
                System.out.println("Sent to hot Shelf, Size: " + hotSize.get());
                hotShelf.onNext(s);
            },
            Throwable::printStackTrace,
            () -> System.out.println("--- Processed all Hot Temperature Orders")
        );

    hotShelf.onBackpressureBuffer(10,
            () -> {
                System.out.println("dropping the oldest .. ");
                hotSize.decrementAndGet();
            },
            BackpressureOverflowStrategy.DROP_OLDEST)
        .observeOn(Schedulers.computation())
        .subscribe(s -> {
            System.out.println("Waiting for the courier..");
            TimeUnit.SECONDS.sleep(3);
            hotSize.decrementAndGet();
            System.out.println("Hot Order delivered, Size on hot Shelf: " + hotSize.get());
        }, Throwable::printStackTrace);

    for (int i = 0; i < list.size(); i++) {
        subjectOrder.onNext(list.get(i));
    }
    subjectOrder.onComplete();

    boolean flag;
    while (!hotShelf.hasComplete()) {
        flag = false; // just for testing, to avoid an error on an empty while
    }
}
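For what it's worth, here is a minimal sketch of one way to cover both points; it is an assumption-laden sketch, not the only approach. onBackpressureDrop hands each discarded item to a callback, so you can log its id without an AtomicInteger (note it drops the newest item, unlike DROP_OLDEST), and a CountDownLatch released in onComplete lets main wait without a busy loop. It assumes Order exposes getId() and reuses the imports above plus java.util.concurrent.CountDownLatch:

PublishProcessor<Order> hotShelf = PublishProcessor.create();
CountDownLatch done = new CountDownLatch(1);

hotShelf
    .onBackpressureDrop(o -> System.out.println("Discarded order " + o.getId()))
    .observeOn(Schedulers.computation(), false, 10)   // bounded prefetch of 10 = shelf size
    .subscribe(
        o -> {
            TimeUnit.SECONDS.sleep(3);                // courier takes 3 seconds per order
            System.out.println("Delivered order " + o.getId());
        },
        Throwable::printStackTrace,
        done::countDown);

list.stream()
    .filter(o -> o.getTemp().equals("hot"))
    .forEach(hotShelf::onNext);
hotShelf.onComplete();
done.await(); // main exits only after the shelf completes (declare throws InterruptedException)

Alternatively, blockingSubscribe() on the delivery pipeline keeps main alive without the latch, as long as the orders are fed in from another thread.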
I need to call two downstream systems in parallel with non-blocking IO from my Spring WebFlux-based REST service API. The first downstream system's capacity is 10 requests at a time, while the second's is 100.
The first downstream system's output is the input to the second downstream system, so I can make more parallel requests to the second system to expedite the process.
The second downstream system's response is very large, so I am unable to hold all the responses in memory to concatenate them; I want to return each response to the client immediately.
Sample code:
@GetMapping(path = "/stream", produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<String> getstream() {
    ExecutorService executor = Executors.newFixedThreadPool(10);
    List<CompletableFuture<Object>> list = new ArrayList<>();
    AtomicInteger ai = new AtomicInteger(1);
    RestTemplate restTemplate = new RestTemplate();
    for (int i = 0; i < 100; i++) {
        CompletableFuture<Object> cff = CompletableFuture.supplyAsync(
            () -> ai.getAndAdd(1) + " first downstream web service " +
                restTemplate.getForObject("http://dummy.restapiexample.com/api/v1/employee/" + ai.get(), String.class)
        ).thenApplyAsync(v -> {
            Random r = new Random();
            Integer in = r.nextInt(1000);
            return v + " second downstream web service " + in + " " +
                restTemplate.getForObject("http://dummy.restapiexample.com/api/v1/employee/" + ai.get() + 1, String.class) + " \n";
        }, executor);
        list.add(cff);
    }
    return Flux.fromStream(list.stream().map(m -> {
        try {
            return m.get().toString();
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
        return "";
    }));
}
This code only works for the first five threads; I only get a response after all threads have completed the process. But I need to return each response to the client immediately, as soon as I get it from the second downstream system.
Note: the above code does not implement a second-level thread pool.
Thank you in advance.
If you're building a non-blocking system using Spring WebFlux, it's better to utilise the capabilities of WebClient in your example. I've created a simple test application where the below code snippet worked for me:
private final WebClient w = WebClient.create("http://localhost:8080/call"); // web client for the external system

@GetMapping(path = "/stream", produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<MyClass> getstream() {
    return Flux
        .range(0, 100) // prepare the initial 100 requests
        .window(10)    // combine elements in batches of 10 (buffer will probably fit better, have a look)
        // .delayElements(Duration.ofSeconds(5)) // for testing purposes you can use this as well
        .doOnNext(flow -> log.info("Batch of 10 is ready")) // double check that the batch is ready
        .flatMap(flow -> flow
            // perform an external async call for each element in the batch of 10;
            // they will be executed sequentially but there will not be any performance issues because
            // the calls are async. If you wish you can add .parallel() to the flow to make it parallel
            .flatMap(element -> w.get().exchange())
            .map(r -> r.bodyToMono(MyClass.class))
        )
        // subscribe to each response and push the received element further down the stream
        .flatMap(response -> Mono.create(s -> response.subscribe(s::success)))
        .window(1000) // batch of 1000 is ready
        .flatMap(flow -> flow
            .flatMap(element -> w.get().exchange())
            .map(r -> r.bodyToMono(MyClass.class))
        )
        .flatMap(response -> Mono.create(s -> response.subscribe(s::success)));
}

public static class MyClass {
    public Integer i;
}
UPDATE:
I've prepared a small application to reproduce your case. You can find it in my repository.
Hello,
I'm unable to get any improved performance with TPL DataFlow and I'm wondering if I'm using it incorrectly.
The application below does the following:
Pulls a message from a Kafka topic
Parses this message into a Foo object with ParseData()
Serializes this Foo into JSON
Then publishes the JSON to a new Kafka topic.
Some single threaded stats:
ParseData can parse strings into Foo at 100 msg/sec (single threaded test)
SerializeMessage can do 200 Foos/sec (single threaded test)
Consuming Kafka messages (skipping all the parsing/serializing) can handle over 2000 msgs/sec
Based on this, I hoped to leverage TPL to improve throughput. My max throughput should be close to the Kafka limit of 2000 msgs/sec.
However, I'm not seeing any improvement in throughput, even though I'm running the application on a machine with 12 physical cores (24 with HT). When I print out the size of the queue for each block, the transformBlock's queue is always around 1000 while the others are under 10, which leads me to believe that the transformBlock isn't leveraging the multi-core system.
Have I set up TPL DataFlow to leverage parallelism correctly?
app = new App();
await app.Start(new[] { "consume-topic" }, cancelSource);

// App class
async Task Start(IEnumerable<string> topics, CancellationTokenSource cancelSource) {
    transformBlock = new TransformBlock<string, Foo>(TransformKafkaMessage,
        new ExecutionDataflowBlockOptions {
            MaxDegreeOfParallelism = 8,
            BoundedCapacity = 1000,
            SingleProducerConstrained = true,
        });
    serializeBlock = new TransformBlock<Foo, string>(SerializeMessage,
        new ExecutionDataflowBlockOptions {
            MaxDegreeOfParallelism = 4,
            BoundedCapacity = 1000,
            SingleProducerConstrained = true,
        });
    publishBlock = new ActionBlock<string>(PublishJson,
        new ExecutionDataflowBlockOptions {
            MaxDegreeOfParallelism = 1,
            BoundedCapacity = 1000,
            SingleProducerConstrained = true
        });

    // Set up the pipeline
    transformBlock.LinkTo(serializeBlock);
    serializeBlock.LinkTo(publishBlock);

    // Start the Kafka listener loop
    consumer.Subscribe(topics);
    while (true) {
        var result = consumer.Consume(cancelSource.Token);
        await ProcessMessage<Ignore, string>(result);
    }
}

// Send the content of the Kafka message to the transform block
async Task ProcessMessage<TKey, TValue>(ConsumeResult<TKey, string> msg) {
    var result = await transformBlock.SendAsync(msg.Value);
}

// Convert the raw string data into an object
Foo TransformKafkaMessage(string data) {
    // Note: this ParseData() function can process about 100 items per sec
    // in local single-threaded testing
    Foo foo = ParseData(data);
    return foo;
}

// Serialize the new Foo into JSON
string SerializeMessage(Foo foo) {
    // The serializer can process about 200 msgs/sec (single-threaded test)
    var json = foo.Serialize();
    return json;
}

// Publish the new message back to Kafka
void PublishJson(string json) {
    // Create a Confluent.Kafka Message
    var kafkaMessage = new Message<Null, string> {
        Value = json
    };
    producer.Produce("produce-topic", kafkaMessage);
}
Version: rocketmq-all-4.1.0-incubating
We send messages at 1000 QPS using sync send, but the broker throws this exception:
[TIMEOUT_CLEAN_QUEUE] broker busy, start flow control for a while
Here is the related broker code:
while (true) {
    try {
        if (!this.brokerController.getSendThreadPoolQueue().isEmpty()) {
            final Runnable runnable = this.brokerController.getSendThreadPoolQueue().peek();
            if (null == runnable) {
                break;
            }
            final RequestTask rt = castRunnable(runnable);
            if (rt == null || rt.isStopRun()) {
                break;
            }
            final long behind = System.currentTimeMillis() - rt.getCreateTimestamp();
            if (behind >= this.brokerController.getBrokerConfig().getWaitTimeMillsInSendQueue()) {
                if (this.brokerController.getSendThreadPoolQueue().remove(runnable)) {
                    rt.setStopRun(true);
                    rt.returnResponse(RemotingSysResponseCode.SYSTEM_BUSY,
                        String.format("[TIMEOUT_CLEAN_QUEUE]broker busy, start flow control for a while, period in queue: %sms, size of queue: %d",
                            behind, this.brokerController.getSendThreadPoolQueue().size()));
                }
            } else {
                break;
            }
        } else {
            break;
        }
    } catch (Throwable ignored) {
    }
}
I found that the broker's default value of sendMessageThreadPoolNums is 1:

/**
 * Thread numbers for the send message thread pool; since a spin lock is used by default since 4.0.x, the default value is 1.
 */
private int sendMessageThreadPoolNums = 1; // 16 + Runtime.getRuntime().availableProcessors() * 4;
private int pullMessageThreadPoolNums = 16 + Runtime.getRuntime().availableProcessors() * 2;

but in previous versions it wasn't 1. If I configure sendMessageThreadPoolNums = 100, will that resolve this issue? And what difference does it make compared to the default value?
Thanks.
SHORT ANSWER:
You have two choices:
Keep sendMessageThreadPoolNums at a small number, say 1, which is the default value since version 4.1.x, and keep the default value of useReentrantLockWhenPutMessage=false, which was introduced in 4.1.x:
sendMessageThreadPoolNums=1
useReentrantLockWhenPutMessage=false
If you need a large number of threads to process message sending, you'd better set useReentrantLockWhenPutMessage=true:
sendMessageThreadPoolNums=128        // a large thread number
useReentrantLockWhenPutMessage=true  // do NOT use the spin lock; use a ReentrantLock when putting messages
I know this might already have been answered, but in all the places where I found it, the answer wouldn't work properly. I'm making a game in Greenfoot and I'm having an issue. I'm generating a random number every time a counter reaches 600, then testing whether that randomly generated number equals 1; if it does, it creates an object. For some reason, the object is created every time the counter reaches 600. I'm somewhat new to Java, so it's probably something simple.
import greenfoot.*;
import java.util.Random;

/**
 * Write a description of class Level_One here.
 *
 * @CuddlySpartan
 */
public class Level_One extends World
{
    Counter counter = new Counter();

    /**
     * Constructor for objects of class Level_One.
     */
    public Level_One()
    {
        super(750, 750, 1);
        prepare();
    }

    public Counter getCounter()
    {
        return counter;
    }

    private void prepare()
    {
        addObject(counter, 150, 40);
        Ninad ninad = new Ninad();
        addObject(ninad, getWidth()/2, getHeight()/2);
        Fail fail = new Fail();
        addObject(fail, Greenfoot.getRandomNumber(getWidth()), Greenfoot.getRandomNumber(getHeight()));
    }

    private int spawnCounter = 0;
    private int invincibleCounter = 0;
    Random random = new Random();
    private int randomNumber;

    public void act()
    {
        controls();
        {
            if (spawnCounter > 500) {
                spawnCounter = 0;
                addObject(new Fail(), Greenfoot.getRandomNumber(getWidth()), Greenfoot.getRandomNumber(getHeight()));
            }
            spawnCounter++;
            {
                if (spawnCounterTwo > 300) {
                    spawnCounterTwo = 0;
                    addObject(new APlus(), Greenfoot.getRandomNumber(getWidth()), Greenfoot.getRandomNumber(getHeight()));
                }
                spawnCounterTwo++;
            }
            if (invincibleCounter > 600)
            {
                int randomNumber = random.nextInt(10);
                if (randomNumber == 1)
                {
                    Invincible invincible = new Invincible();
                    addObject(invincible, Greenfoot.getRandomNumber(getWidth()), Greenfoot.getRandomNumber(getHeight()));
                    invincibleCounter = 0;
                }
                if (randomNumber == 2)
                {
                    Storm storm = new Storm();
                    addObject(storm, Greenfoot.getRandomNumber(getWidth()), Greenfoot.getRandomNumber(getHeight()));
                }
                else
                {
                }
            }
            invincibleCounter++;
        }
    }

    private int spawnCounterTwo = 100;

    public void controls()
    {
        if (Greenfoot.isKeyDown("escape"))
        {
            Greenfoot.stop();
        }
    }
}
I'm not getting errors and it compiles fine, but when I run it I have issues. Any help? Thanks in advance!
This is only speculation, since I cannot see the rest of your code, but I suspect that you are seeding your random number generator with a constant number. In that case, every time you run your program, the random number generator produces numbers in the same order. To confirm this, please show some more code.
Also, your brackets do not match, so at least please show enough code to have matching curly braces.
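To illustrate the seeding point (these two lines are mine, not the poster's):

Random fixed  = new Random(42); // constant seed: identical number sequence on every run
Random varied = new Random();   // no seed: seeded from system time/entropy, differs per run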
Are you sure it is created exactly when the counter hits 600? You're incrementing the counter every frame, and at the default ~30 fps speed, that's twenty seconds. Then, every frame after that, you're getting a random integer and have a 10% chance to make an Invincible. A 10% chance will, on average, come up within ten frames, which is a third of a second. The counter then resets, you wait twenty more seconds, create an Invincible within the next second or so, and so on. If you want a single 10% chance every 20 seconds, you need to reset the counter in the else branch as well as the "then" branch (or just reset it immediately inside your very first if), as sketched below.
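Concretely, a sketch of that last option using only names from the question (illustrative, not a drop-in file), so exactly one die roll happens per 600-frame window:

if (invincibleCounter > 600) {
    invincibleCounter = 0;                 // reset first: one 10% roll per 600-frame window
    int randomNumber = random.nextInt(10); // 0..9
    if (randomNumber == 1) {
        addObject(new Invincible(), Greenfoot.getRandomNumber(getWidth()),
                  Greenfoot.getRandomNumber(getHeight()));
    } else if (randomNumber == 2) {
        addObject(new Storm(), Greenfoot.getRandomNumber(getWidth()),
                  Greenfoot.getRandomNumber(getHeight()));
    }
}
invincibleCounter++;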
The code below is for a fundraiser dinner to purchase a plot of land. The purpose is to show the progress of the square meters of land purchased (around 2976 m2). Every time a square meter is purchased, the application adds an image tile which corresponds to an actual 1 m2. Eventually the tiles (~2976 of them) fill up like a grid to complete the land once it is fully purchased.
The size of each tile is around 320 bytes, and there are 2976 tiles in total.
I have also included an example image below.
The thing that drives me crazy with this code (in JavaFX) is that it consumes around 90 to 100% of one of my processors, and the memory usage keeps increasing as the tiles add up until the program runs out of memory and crashes after a while. This is not desirable during the fundraising dinner.
The full code is available for testing at
https://github.com/rihani/Condel-Park-Fundraiser/tree/master/src/javafxapplication3
(you will need to change the boolean split to true/false, which will split the images for you, around 3000 images).
The main culprit that uses all the memory and CPU is the AnimationTimer() shown below, and I am wondering if anyone can help me reduce its memory and CPU usage.
To briefly explain how the code below is used: the land is divided into 2 panes; when the first one, grid_pane1, is filled up, the second pane, grid_pane2, starts to fill up.
A flashing tile is also used to show the current progress.
I am using total_donnation++ to test the code, but would normally use MySQL to pull the new total raised during the fundraising dinner.
AnimationTimer() code:
translate_timer = new AnimationTimer() {
    @Override
    public void handle(long now) {
        if (now > translate_lastTimerCall + 10_000_000_000L)
        {
            old_total_donnation = total_donnation;
            try
            {
                // c = DBConnect.connect();
                // SQL = "Select * from donations";
                // rs = c.createStatement().executeQuery(SQL);
                // while (rs.next())
                //     { total_donnation = rs.getInt("total_donnation"); }
                // c.close();
                total_donnation++;
                if (total_donnation != old_total_donnation)
                {
                    System.out.format("Total Donation: %s \n", total_donnation);
                    old_total_donnation = total_donnation;
                    if (!pane1_full)
                    {
                        grid_pane1.getChildren().clear();
                        grid_pane1.getChildren().removeAll(imageview_tile1, hBox_outter_last);
                    }
                    grid_pane2.getChildren().clear();
                    grid_pane2.getChildren().removeAll(imageview_tile2, hBox_outter_last);
                    for (i = 0; i <= total_donnation; i++)
                    {
                        if (pane1_full) { System.out.println("Pane 1 has not been redrawn"); break; }
                        file1 = new File("pane1_img" + i + ".png");
                        pane1_tiled_image = new Image(file1.toURI().toString(), image_Width, image_Height, false, false);
                        imageview_tile1 = new ImageView(pane1_tiled_image);
                        grid_pane1.add(imageview_tile1, current_column_pane1, current_row_pane1);
                        current_column_pane1 = current_column_pane1 + 1;
                        if (current_column_pane1 == max_columns_pane1)
                        {
                            current_row_pane1 = current_row_pane1 + 1;
                            current_column_pane1 = 0;
                        }
                        if (i == max_donnation_pane1) { pane1_full = true; System.out.println("Pane 1 full"); break; }
                        if (i == total_donnation)
                        {
                            if (i != max_donnation_pane1)
                            {
                                hBox_outter_last = new HBox();
                                hBox_outter_last.setStyle(style_outter);
                                hBox_outter_last.getChildren().add(blink_image);
                                ft1 = new FadeTransition(Duration.millis(500), hBox_outter_last);
                                ft1.setFromValue(1.0);
                                ft1.setToValue(0.3);
                                ft1.setCycleCount(Animation.INDEFINITE);
                                ft1.setAutoReverse(true);
                                ft1.play();
                                grid_pane1.add(hBox_outter_last, current_column_pane1, current_row_pane1);
                            }
                        }
                    }
                    if (i < total_donnation)
                    {
                        total_donnation_left = total_donnation - max_donnation_pane1;
                        for (j = 0; j <= total_donnation_left; j++)
                        {
                            file2 = new File("pane2_img" + j + ".png");
                            pane2_tiled_image = new Image(file2.toURI().toString(), image_Width, image_Height, false, false);
                            imageview_tile2 = new ImageView(pane2_tiled_image);
                            grid_pane2.add(imageview_tile2, current_column_pane2, current_row_pane2);
                            current_column_pane2 = current_column_pane2 + 1;
                            if (current_column_pane2 == max_columns_pane2)
                            {
                                current_row_pane2 = current_row_pane2 + 1;
                                current_column_pane2 = 0;
                            }
                            if (j == max_donnation_pane2) { System.out.println("Pane 2 full"); break; }
                            if (j == total_donnation_left)
                            {
                                if (j != max_donnation_pane2)
                                {
                                    hBox_outter_last = new HBox();
                                    hBox_outter_last.setStyle(style_outter);
                                    hBox_outter_last.getChildren().add(blink_image);
                                    ft = new FadeTransition(Duration.millis(500), hBox_outter_last);
                                    ft.setFromValue(1.0);
                                    ft.setToValue(0.3);
                                    ft.setCycleCount(Animation.INDEFINITE);
                                    ft.setAutoReverse(true);
                                    ft.play();
                                    grid_pane2.add(hBox_outter_last, current_column_pane2, current_row_pane2);
                                }
                            }
                        }
                    }
                    current_column_pane1 = 0;
                    current_row_pane1 = 0;
                    current_column_pane2 = 0;
                    current_row_pane2 = 0;
                }
            }
            catch (Exception ex) {}
            translate_lastTimerCall = now;
        }
    }
};
First and foremost, you create a lot of indefinite FadeTransitions that are never stopped. These add up over time and cause both memory and CPU leaks. You should stop() a transition before starting a new one. Alternatively, you only need one transition that interpolates the value of a DoubleProperty; you can then bind each node's opacity to that property:
DoubleProperty opacity = new SimpleDoubleProperty();
Transition opacityTransition = new Transition() {
    {
        // configure the single, shared animation
        setCycleDuration(Duration.millis(500));
        setCycleCount(Animation.INDEFINITE);
        setAutoReverse(true);
    }
    @Override
    protected void interpolate(double frac) {
        opacity.set(frac);
    }
};
opacityTransition.play();
// elsewhere
hBox_outter_last.opacityProperty().bind(opacity);
You may want to preload all the image tiles beforehand, so that you avoid reading from disk in the loop.
You also unnecessarily destroy and recreate a large part of the scene in every cycle. You should modify your code to only add the new tiles rather than dropping them all and recreating them from scratch.
Finally, when you actually query the database, you should do it from a different thread, not the JavaFX application thread, because otherwise your UI will be unresponsive for the duration of the query (e.g. your fade transitions will stop animating). A sketch follows.
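As a sketch of that last point, assuming a hypothetical fetchTotalDonation() helper that wraps the JDBC query shown in the commented-out code, a javafx.concurrent.Task keeps the query off the FX thread while its success handler runs back on it:

// fetchTotalDonation() and updateTiles() are hypothetical helpers, not the poster's code.
Task<Integer> pollTask = new Task<Integer>() {
    @Override
    protected Integer call() throws Exception {
        return fetchTotalDonation(); // runs on a background thread
    }
};
pollTask.setOnSucceeded(e -> {
    int total = pollTask.getValue();
    // setOnSucceeded runs on the JavaFX application thread, so UI work is safe here
    updateTiles(total);              // incremental redraw of only the new tiles
});
new Thread(pollTask, "donation-poll").start();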
I have a suggestion:
Do not split the image; use 2 panes instead. One pane displays the whole image; the second is a grid pane overlapping the first. When a square meter is purchased, the background of the corresponding grid cell becomes transparent, revealing that part of the image. A minimal sketch of the idea is below.
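The sketch is meant for inside an Application.start() with the usual javafx.scene imports; the file name, image size, and the 62x48 grid are assumed placeholders:

ImageView land = new ImageView(new Image("land.png", 930, 640, true, true)); // whole image, loaded once
GridPane mask = new GridPane();
int cols = 62, rows = 48;                        // 62 * 48 = 2976 cells, one per square metre
Rectangle[] cells = new Rectangle[cols * rows];
for (int k = 0; k < cells.length; k++) {
    cells[k] = new Rectangle(930.0 / cols, 640.0 / rows, Color.DARKGREEN);
    mask.add(cells[k], k % cols, k / cols);      // column, row
}
StackPane root = new StackPane(land, mask);      // mask sits on top of the image

// when square metre n is purchased, reveal it; no nodes are created or destroyed:
// cells[n].setFill(Color.TRANSPARENT);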