Is the ForkJoinPool in the Java 8 Stream API a singleton? - java-8

List<Integer> list = Arrays.asList(3, 4, 6, 1, 2, 3, 5, 6);
list.parallelStream().forEach(System.out::print);
list.parallelStream().map(a -> a + 2).forEach(a -> System.out.println(Thread.currentThread().getName() + "--" + a));
Will lines two and three use the same ForkJoinPool? If not, how can I make them use one?

All parallel streams run in the singleton pool returned by ForkJoinPool.commonPool(). The exception is that if you run a parallel stream from inside a ForkJoinPool, the stream will run within the pool from which it is invoked.
These are implementation details that aren't specified anywhere in the Streams API documentation, so the JDK developers would be perfectly within their rights to pick some other pool, or to eschew the fork/join framework entirely and do something else.
If it is crucial for you to know which threads end up executing your code, the Streams API might not be your best choice.
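To see this for yourself, print the thread names from both pipelines; on a stock JVM they all come out as ForkJoinPool.commonPool-worker-N (plus the calling thread, which also participates in the work). A minimal sketch of that check:

import java.util.Arrays;
import java.util.List;

public class CommonPoolCheck {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(3, 4, 6, 1, 2, 3, 5, 6);

        // Both pipelines report threads from the same common pool (and sometimes "main",
        // because the calling thread helps with the work too).
        list.parallelStream()
            .forEach(i -> System.out.println("pipeline 1: " + Thread.currentThread().getName()));

        list.parallelStream()
            .map(a -> a + 2)
            .forEach(a -> System.out.println("pipeline 2: " + Thread.currentThread().getName() + "--" + a));
    }
}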

At the moment, yes: there is a single pool for all parallel streams by default - it's ForkJoinPool#commonPool(); so your observation is correct.
There is a way to set up a custom pool for each execution of a Stream pipeline, see here.
It would look like this for your second example:
List<Integer> list = Arrays.asList(3, 4, 6, 1, 2, 3, 5, 6);
ForkJoinPool forkJoinPool = new ForkJoinPool(4);
ForkJoinTask<?> task = forkJoinPool.submit(() -> list
        .parallelStream()
        .map(a -> a + 2)
        .forEach(a -> System.out.println(Thread.currentThread().getName() + "--" + a)));
task.get(); // blocks until the pipeline completes; throws InterruptedException / ExecutionException

Related

Gatling - Dynamic Scenario, InjectionProfile, Assertion creation based on Configuration

I am attempting to write a simulation that can read from a config file for a set of APIs that each have a set of properties.
I read the config for n active scenarios and create requests from a CommonRequest class.
Then those requests are built into scenarios from a CommonScenario.
CommonScenarios have attributes that are used to create their injection profiles.
That all seems to work without issue, but when I try to use the properties / CommonScenario requests to build a set of Assertions, it does not work as expected.
// get active scenarios from the config
val activeApiScenarios: List[String] = Utils.getStringListProperty("my.active_scenarios")

// build all active scenarios from config
var activeScenarios: Set[CommonScenario] = Set[CommonScenario]()
activeApiScenarios.foreach { scenario =>
  activeScenarios += CommonScenarioBuilder()
    .withRequestName(Utils.getProperty("my." + scenario + ".request_name"))
    .withRegion(Utils.getProperty("my." + scenario + ".region"))
    .withConstQps(Utils.getDoubleProperty("my." + scenario + ".const_qps"))
    .withStartQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps").head)
    .withPeakQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps")(1))
    .withEndQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps")(2))
    .withFeeder(Utils.getProperty("my." + scenario + ".feeder"))
    .withAssertionP99(Utils.getDoubleProperty("my." + scenario + ".p99_lte_assertion"))
    .build
}
// build population builder set by adding inject profile values to scenarios
var injectScenarios: Set[PopulationBuilder] = Set[PopulationBuilder]()
var assertions: Set[Assertion] = Set[Assertion]()
activeScenarios.foreach { scenario =>
  // create injection profiles from CommonScenarios
  injectScenarios += scenario.getCommonScenarioBuilder
    .inject(nothingFor(5 seconds),
      rampUsersPerSec(scenario.startQps).to(scenario.rampUpQps).during(rampOne seconds),
      rampUsersPerSec(scenario.rampUpQps).to(scenario.peakQps).during(rampTwo seconds),
      rampUsersPerSec(scenario.peakQps).to(scenario.rampDownQps).during(rampTwo seconds),
      rampUsersPerSec(scenario.rampDownQps).to(scenario.endQps).during(rampOne seconds)).protocols(httpProtocol)
  // create scenario assertions - this does not work for some reason
  assertions += Assertion(Details(List(scenario.requestName)), TimeTarget(ResponseTime, Percentiles(4)), Lte(scenario.assertionP99))
}
setUp(injectScenarios.toList)
  .assertions(assertions)
Note: scenario.requestName is taken straight from the built scenario
  .feed(feederBuilder)
  .exec(commonRequest)
I would expect the Assertions to be built from their scenarios into an iterable and passed into setUp().
What I get:
When I print everything out, the scenarios and injection profiles all look good, but then I print my "assertions" and get 4 assertions for the same scenario name with 4 different Lte() values. This output is generalized - I actually configured 12 APIs, all with different names, Lte() values, etc.
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(500.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(1500.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(1000.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(2000.0)
After the simulation the assertions all run like normal:
Request Name: 4th percentile of response time is less than or equal to 500.0 : false
Request Name: 4th percentile of response time is less than or equal to 1500.0 : false
Request Name: 4th percentile of response time is less than or equal to 1000.0 : false
Request Name: 4th percentile of response time is less than or equal to 2000.0 : false
Not sure what I am doing wrong when building my assertions. Is this even a valid approach? I wanted to ask for help before I abandon this for a different approach.
Disclaimer: Gatling creator here.
It should work.
That said, there are several things I'm really not fond of.
assertions += Assertion(Details(List(scenario.requestName)), TimeTarget(ResponseTime, Percentiles(4)), Lte(scenario.assertionP99))
You shouldn't be using the internal AST here. You should use the DSL like you've done for the injection profile.
var assertions : Set[Assertion] = Set[Assertion]()
activeScenarios.foreach { scenario =>
You should use map on activeScenarios (similar to Java's Stream API) rather than a mutable accumulator.
val activeScenarios = activeApiScenarios.map(???)
val injectScenarios = activeScenarios.map(???)
val assertions = activeScenarios.map(???)
Also, since you don't seem to be familiar with Scala, you might want to switch to Java (which Gatling has supported for more than a year).

Reactor changes in Spring Boot 2 M4

I have updated from Spring Boot 2.0.0.M3 to 2.0.0.M4, which updates Reactor from 3.1.0.M3 to 3.1.0.RC1. This causes my code to break in a number of places.
Mono.and() now returns Mono<Void>, where previously it returned Mono<Tuple>.
This is also the case for Mono.when().
The following code compiles with the older versions, but not with the new one:
Mono<String> m1 = Mono.just("A");
Mono<String> m2 = Mono.just("B");
Mono<String> andResult = m1.and(m2).map(t -> t.getT1() + t.getT2());
Mono<String> whenResult = Mono.when(m1, m2).map(t -> t.getT1() + t.getT2());
Have there been any changes to how this should work?
The when and and variants that produce a Tuple have been replaced with zip/zipWith, their exact equivalents in the Flux API, in order to align the two APIs. The remaining when and and methods, which are found only in Mono, are now purely about combining completion signals, discarding the onNext values (hence they return Mono<Void>).
I switched to Mono.zip(...):
mono1.and(mono2).map(...)
=>
Mono.zip(mono1, mono2).map(...)
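For completeness, here is a minimal, self-contained version of the migration for the code in the question (assuming Reactor 3.1+ on the classpath):

import reactor.core.publisher.Mono;

public class ZipExample {
    public static void main(String[] args) {
        Mono<String> m1 = Mono.just("A");
        Mono<String> m2 = Mono.just("B");

        // Static zip: combines the two onNext values into a Tuple2
        Mono<String> zipResult = Mono.zip(m1, m2).map(t -> t.getT1() + t.getT2());

        // Instance zipWith: the replacement for the old m1.and(m2)
        Mono<String> zipWithResult = m1.zipWith(m2).map(t -> t.getT1() + t.getT2());

        System.out.println(zipResult.block());     // AB
        System.out.println(zipWithResult.block()); // AB
    }
}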

how to scale down instances based on their uptime with apache marathon?

I find myself in a situation where I need to scale down container instances based on their actual lifetime. It looks like fresh instances are removed first when scaling down through Marathon's API. Is there any configuration I'm not aware of that implements this kind of strategy or policy when scaling down instances on Apache Marathon?
Right now I'm using marathon-lb-autoscale to automatically adjust the number of running instances. What actually happens under the hood is that marathon-lb-autoscale performs a PUT request that updates the instances property of the current application when req/s increases or decreases.
scale_list.each do |app,instances|
  req = Net::HTTP::Put.new('/v2/apps/' + app)
  if !@options.marathonCredentials.empty?
    req.basic_auth(@options.marathonCredentials[0], @options.marathonCredentials[1])
  end
  req.content_type = 'application/json'
  req.body = JSON.generate({'instances'=>instances})
  Net::HTTP.new(@options.marathon.host, @options.marathon.port).start do |http|
    http.request(req)
  end
end
I don't know if the upgradeStrategy configuration is taken into account when scaling down instances. With the default settings I cannot get the expected behaviour to work.
{
  "upgradeStrategy": {
    "minimumHealthCapacity": 1,
    "maximumOverCapacity": 1
  }
}
ACTUAL
instance 1
instance 2
PUT /v2/apps/my-app {instances: 3}
instance 1
instance 2
instance 3
PUT /v2/apps/my-app {instances: 2}
instance 1
instance 2
EXPECTED
instance 1
instance 2
PUT /v2/apps/my-app {instances: 3}
instance 1
instance 2
instance 3
PUT /v2/apps/my-app {instances: 2}
instance 2
instance 3
You can specify a killSelection directly inside the application's config: YoungestFirst kills the youngest tasks first, and OldestFirst kills the oldest ones first.
Reference: https://mesosphere.github.io/marathon/docs/configure-task-handling.html
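For the expected behaviour in the question (keep the newer instances and kill the oldest ones when scaling down), the app definition would gain a killSelection field along these lines. This is only a sketch: the exact spelling the API accepts (OldestFirst vs. OLDEST_FIRST) depends on the Marathon version, so check the linked docs before using it:

{
  "id": "/my-app",
  "killSelection": "OldestFirst",
  "upgradeStrategy": {
    "minimumHealthCapacity": 1,
    "maximumOverCapacity": 1
  }
}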

Spring integration Scatter-Gather pattern with JMS transport

I need to implement the following architecture:
I have data that must be sent to several systems (some external applications) using JMS.
Depending on the data, it needs to be sent only to the necessary systems (for example, if the number of systems is 4, then it may go to anywhere from 1 to 4 of them).
It is necessary to wait for a response from the systems to which the messages were sent; after receiving all the answers, the received data must be processed (or whatever has arrived must be processed once a timeout expires).
The correlation id is contained in the header of both outgoing and incoming JMS messages.
Each such process can be started asynchronously and in parallel.
Right now I have this implemented only with Spring JMS. I synchronize the threads manually and also manage the thread pools manually.
The correlation ids and information about which systems messages were sent to are stored as state and updated after new messages are received, etc.
But I want to simplify the logic and use the Spring Integration Java DSL, the Scatter-Gather pattern (which is exactly my case) and other useful Spring features.
Can you show me an example of how such an architecture can be implemented with Spring Integration / IntegrationFlow?
Here is a sample from our test cases:
@Bean
public IntegrationFlow scatterGatherFlow() {
    return f -> f
            .scatterGather(scatterer -> scatterer
                            .applySequence(true)
                            .recipientFlow(m -> true, sf -> sf.handle((p, h) -> Math.random() * 10))
                            .recipientFlow(m -> true, sf -> sf.handle((p, h) -> Math.random() * 10))
                            .recipientFlow(m -> true, sf -> sf.handle((p, h) -> Math.random() * 10)),
                    gatherer -> gatherer
                            .releaseStrategy(group ->
                                    group.size() == 3 ||
                                            group.getMessages()
                                                    .stream()
                                                    .anyMatch(m -> (Double) m.getPayload() > 5)),
                    scatterGather -> scatterGather
                            .gatherTimeout(10_000));
}
So, these are the parts:
scatterer - sends messages to the recipients; in your case, all those JMS services. It can also be a scatterChannel, typically a PublishSubscribeChannel, so the Scatter-Gather might not know its subscribers in advance.
gatherer - well, it is just an aggregator with all its possible options.
scatterGather - is just a convenience for the direct properties of the ScatterGatherHandler and the common endpoint options.
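For the JMS setup from the question, the recipient flows can delegate to JMS outbound gateways instead of the inline handle((p, h) -> ...) lambdas. The following is only a sketch, not a drop-in solution: it assumes a ConnectionFactory bean and spring-integration-jms on the classpath (org.springframework.integration.jms.dsl.Jms); the queue names system1.requests / system2.requests and the needsSystem1 / needsSystem2 selectors are hypothetical stand-ins for your "send only to the necessary systems" logic:

@Bean
public IntegrationFlow jmsScatterGatherFlow(ConnectionFactory connectionFactory) {
    return f -> f
            .scatterGather(scatterer -> scatterer
                            .applySequence(true)
                            // each recipient flow sends its request over JMS and waits for the correlated reply
                            .recipientFlow(m -> needsSystem1(m), sf -> sf
                                    .handle(Jms.outboundGateway(connectionFactory)
                                            .requestDestination("system1.requests")))
                            .recipientFlow(m -> needsSystem2(m), sf -> sf
                                    .handle(Jms.outboundGateway(connectionFactory)
                                            .requestDestination("system2.requests"))),
                    gatherer -> gatherer
                            // release once every system that was actually addressed has replied
                            .releaseStrategy(group -> group.size() == group.getSequenceSize()),
                    scatterGather -> scatterGather
                            .gatherTimeout(30_000));
}

The JMS outbound gateway handles the request/reply correlation for you, and gatherTimeout bounds how long the gatherer waits when one of the systems never answers.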

How to pass parameters to forked test processes in Gradle?

I am running UI tests in parallel with Gradle:
build.gradle:
test {
    maxParallelForks = 4
}
I want to pass a process number (1, 2, 3, 4) to each of these forked processes.
The goal is to make these tests use different virtual displays so that the UI tests do not conflict with each other. Ideally I would like to pass the system property DISPLAY=:15:1, DISPLAY=:15:2, DISPLAY=:15:3, DISPLAY=:15:4 to the forked processes.
I hate to be a necroposter, but I had the same problem (how to use the forked process number as an index) and managed to solve it, so maybe the solution below will help someone.
Per mrhaki's answer to a similar question, Gradle propagates a unique org.gradle.test.worker property to each forked JVM. Unfortunately, these worker values do not start from 1 (so they cannot be used directly as index values), but I figured out that they come in sequence. E.g. for maxParallelForks = 3 there can be org.gradle.test.worker = 33, 34 and 35 for the forked processes. Thanks to this, we can build indices from the worker values.
build.gradle:
test {
    maxParallelForks = 4
    systemProperty 'forks', maxParallelForks
}
somewhere in Java:
Integer maxForks = Integer.valueOf(System.getProperty("forks", "1"));
Integer worker = Integer.valueOf(System.getProperty("org.gradle.test.worker", "1"));
int index = (worker % maxForks) + 1; // index will be 1, or 2, or 3 or 4
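To tie this back to the original goal (one virtual display per fork), the computed index can be turned into a display address and handed to whatever starts the UI under test. A minimal sketch, assuming a hypothetical launchUiUnderTest(...) helper and the DISPLAY=:15:<n> numbering from the question:

public class DisplayPerFork {
    public static void main(String[] args) {
        int maxForks = Integer.parseInt(System.getProperty("forks", "1"));
        int worker = Integer.parseInt(System.getProperty("org.gradle.test.worker", "1"));
        int index = (worker % maxForks) + 1;  // 1..maxForks, unique per fork if the worker numbers are consecutive

        String display = ":15:" + index;      // matches the DISPLAY=:15:<n> pattern from the question
        launchUiUnderTest(display);
    }

    private static void launchUiUnderTest(String display) {
        // hypothetical: pass the display to a ProcessBuilder environment, an Xvfb wrapper, or the WebDriver config
        System.out.println("Would run UI tests on DISPLAY=" + display);
    }
}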
