Is there a way to achieve something similar to my code below, without having to repeat myself, while also keeping the processing cost low?
List<String> alist = new ArrayList<>();
alist.add("hello");
alist.add("hello2");
if (verbose) {
    alist.stream()
         .peek(System.out::println)
         .forEach(/*dostuff*/);
} else {
    alist.stream().forEach(/*dostuff*/);
}
As seen above, I'm forced to repeat myself by handling the stream in either the if or the else branch, which looks kind of ugly once the stream becomes a bit longer.
There's another option which in my opinion looks cleaner, but should be worse performance-wise, as it checks the verbose boolean for every item in the list.
List<String> alist = new ArrayList<>();
alist.add("hello");
alist.add("hello2");
alist.stream()
     .peek(this::printVerbose)
     .forEach(/*dostuff*/);

private void printVerbose(String v) {
    if (verbose) {
        System.out.println(v);
    }
}
You could do something like this:
Stream<String> stream = alist.stream();
if (verbose) {
    stream = stream.peek(System.out::println);
}
stream.forEach(/*dostuff*/);
There's another way that checks the flag only once, when creating the Consumer to be passed to peek. You need the following method:
public static <T> Consumer<? super T> logIfNeeded(boolean verbose) {
return verbose ? System.out::println : t -> { };
}
Then, in your stream pipeline:
alist.stream()
.peek(logIfNeeded(verbose))
.forEach(/*dostuff*/);
The difference with your 2nd approach is that the flag is not checked for every element; the action is chosen eagerly, when the static method is called at stream pipeline declaration.
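For reference, here is a minimal, self-contained sketch of that approach (the class name, list contents and the /*dostuff*/ placeholder are only illustrative):
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class VerboseStreamDemo {

    // Chosen once, before the pipeline runs: either a real logging action or a no-op.
    static <T> Consumer<? super T> logIfNeeded(boolean verbose) {
        return verbose ? System.out::println : t -> { };
    }

    public static void main(String[] args) {
        boolean verbose = true;
        List<String> alist = Arrays.asList("hello", "hello2");

        alist.stream()
             .peek(logIfNeeded(verbose))
             .forEach(s -> { /* dostuff */ });
    }
}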
Related
Given a Flux or a Mono from Project Reactor, is there a way to get the Flux or Mono to print out what the operator chain looks like? For example, given the code below:
Flux<String> flux = Flux.just("a", "b", "c")
        .map(v -> v.toUpperCase())
        .log();
Is there some way to get the flux to print out a list of all the operators that are chained inside the processing pipeline? Some nicely formatted ASCII text or a marble diagram?
printTheFlux(flux) should produce a nice printout that shows the structure of all the operators from the example above. I am not expecting it to reproduce the code in the lambdas, just a way to see which operators are chained together.
There are partial building blocks for doing this in the Scannable interface:
public String textRepresentation(Flux<?> flux) {
    Scannable sc = Scannable.from(flux);
    // scan the last operator in the chain and ask if it knows its parents
    List<String> names = sc.parents()
            .map(Scannable::operatorName)
            .collect(Collectors.toList());
    // as it traverses the chain from bottom to top, we need to reverse the order
    Collections.reverse(names);
    // we also need to add the last operator itself
    names.add(sc.operatorName());
    return names.toString();
}
@Test
public void textRepresentationTest() {
    Flux<String> flux = Flux.just("a", "b", "c")
            .map(v -> v.toUpperCase())
            .log();
    System.out.println(textRepresentation(flux));
}
Prints
[map, log]
Not all operators fully support it though (as you can see, the just source doesn't for instance).
Nice suggestion!
However, while waiting for that, you can do something like this:
Disposable flux = Flux.just("a", "b", "c")
.map(String::toUpperCase)
.doOnNext(FluxUtil::print)
.subscribe();
where FluxUtil::print is just a static method that you can write in different ways.
Here is the complete code that works for me:
import reactor.core.Disposable;
import reactor.core.publisher.Flux;

public class FluxUtil {
    private static String s = "";

    public static void main(String[] args) {
        Disposable flux = Flux.just("a", "b", "c")
                .map(String::toUpperCase)
                .doOnNext(FluxUtil::print)
                .subscribe();
    }

    // builds up the chain seen so far and prints it for each element: "A", "A->B", "A->B->C"
    private static Object print(Object o) {
        s = !s.isEmpty() ? s.concat("->") : s;
        s = s.concat(o.toString());
        System.out.println(s);
        return o;
    }
}
I have a delegate that takes two numbers and creates a System.Windows.Point from them:
(x, y) => new Point(x,y);
I want to learn how I can use TPL Dataflow, specifically TransformBlock, to do that.
I would have something like this:
ISourceBlock<double> Xsource;
ISourceBlock<double> Ysource;
ITargetBlock<Point> PointTarget;
// is there such a thing?
TransformBlock<double, double, Point> PointCreatorBlock;
// and also, how should I wire them together?
UPDATE:
Also, how can I assemble a network that joins more than two arguments? For example, let's say I have a method that receives eight arguments, each one coming from a different buffer. How can I create a block that knows when every argument has one instance available, so that the object can be created?
I think what you're looking for is the JoinBlock. Currently there are two-input and three-input variants, each of which outputs a tuple. These could be combined to create an eight-parameter result. Another approach would be creating a class to hold the parameters and using various blocks to process and construct that parameter class.
For the simple example of combining two ints for a point:
class MyClass {
BufferBlock<int> Xsource;
BufferBlock<int> Ysource;
JoinBlock<int, int> pointValueSource;
TransformBlock<Tuple<int, int>, Point> pointProducer;
public MyClass() {
CreatePipeline();
LinkPipeline();
}
private void CreatePipeline() {
Xsource = new BufferBlock<int>();
Ysource = new BufferBlock<int>();
pointValueSource = new JoinBlock<int, int>(new GroupingDataflowBlockOptions() {
Greedy = false
});
pointProducer = new TransformBlock<Tuple<int, int>, Point>((Func<Tuple<int,int>,Point>)ProducePoint,
new ExecutionDataflowBlockOptions()
{ MaxDegreeOfParallelism = Environment.ProcessorCount });
}
private void LinkPipeline() {
Xsource.LinkTo(pointValueSource.Target1, new DataflowLinkOptions() {
PropagateCompletion = true
});
Ysource.LinkTo(pointValueSource.Target2, new DataflowLinkOptions() {
PropagateCompletion = true
});
pointValueSource.LinkTo(pointProducer, new DataflowLinkOptions() {
PropagateCompletion = true
});
//pointProducer.LinkTo(Next step in processing)
}
private Point ProducePoint(Tuple<int, int> XandY) {
return new Point(XandY.Item1, XandY.Item2);
}
}
The JoinBlock will wait until it has data available on both of its input buffers before producing an output. Also note that in this case, if X's and Y's arrive out of order at the input buffers, care needs to be taken to re-sync them: the JoinBlock will only combine the first X value with the first Y value it receives, and so on.
I've got some working, inelegant code here:
The custom object is:
public class Person {
    private int id;

    public int getId() {
        return this.id;
    }
}
And I have a class containing a Set<Person> allPersons holding all available subjects. I want to extract a new Set<Person> based on one or more ids of my choosing. I've written something that works using nested enhanced for loops, but it strikes me as inefficient and makes a lot of unnecessary comparisons. I am getting used to working with Java 8, but can't quite figure out how to compare the Set against an array. Here is my working but verbose code:
public class MyProgram {
    private Set<Person> allPersons; // contains 100 people with ids 1-100

    public Set<Person> getPersonById(int[] ids) {
        Set<Person> personSet = new HashSet<>(); // or any type of set
        for (int i : ids) {
            for (Person p : allPersons) {
                if (p.getId() == i) {
                    personSet.add(p);
                }
            }
        }
        return personSet;
    }
}
And to get my result, I'd call something along the lines of:
Set<Person> resultSet = getPersonById(new int[]{2, 56, 66});
// resultSet would then contain the 3 people with the corresponding ids
My question is: how would I convert the getPersonById method into something that streams allPersons and matches the id against any one of the ints in its parameter array? I thought of a filter operation, but since the parameter is an array, I can't get it to accept just the ones I want.
The working answer to this is:
return allPersons.stream()
.filter(p -> (Arrays.stream(ids).anyMatch(i -> i == p.getId())) )
.collect(Collectors.toSet());
However, using the bottom half of @Flown's suggestion, if the program were designed to have a Map, it would also work (and work much more efficiently).
As you said, you can introduce a Stream::filter step using a Stream::anyMatch operation.
public Set<Person> getPersonById(int[] ids) {
Objects.requireNonNull(ids);
if (ids.length == 0) {
return Collections.emptySet();
}
return allPersons.stream()
.filter(p -> IntStream.of(ids).anyMatch(i -> i == p.getId()))
.collect(Collectors.toSet());
}
If the method is called more often, it would be a good idea to map each Person to its id in a Map<Integer, Person>. The advantage is that the lookup is much faster than iterating over the whole set of Person. Then your algorithm may look like this:
private Map<Integer, Person> idMapping;
public Set<Person> getPersonById(int[] ids) {
Objects.requireNonNull(ids);
return IntStream.of(ids)
.filter(idMapping::containsKey)
.mapToObj(idMapping::get)
.collect(Collectors.toSet());
}
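For completeness, the idMapping can be built once up front. A minimal sketch (buildIdMapping is just a hypothetical helper name, and it assumes ids are unique, since Collectors.toMap throws on duplicate keys):
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

// Build the id -> Person lookup once and reuse it for every getPersonById call.
private Map<Integer, Person> buildIdMapping(Set<Person> allPersons) {
    return allPersons.stream()
            .collect(Collectors.toMap(Person::getId, Function.identity()));
}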
Traditional functional languages think of reduce in terms of an initial value and an accumulator over the list. In Java things are more complicated, as reduce also requires a BinaryOperator.
I would like to know if we have a better way of writing this kind of function:
public JsonObject growPath(final JsonObject obj) {
    // paths is a list of strings
    return this.paths.stream().reduce(obj, (child, path) -> {
        if (!child.containsKey(path) || !(child.get(path) instanceof JsonObject)) {
            // We override anything that is not an object, since the path
            // specifies that it should be an object.
            child.put(path, JsonObject.create());
        }
        return child.getObject(path);
    }, (first, last) -> last);
}
I would like to avoid the BinaryOperator argument. Should I use something other than reduce?
You are using the wrong tool for the job. You are performing an action that modifies obj, which has nothing to do with reduction at all. If we ignore the modifying aspect, this operation is a left fold, which Streams do not support (in general). You can only implement it using reduce if the function is associative, which yours is not. So you are best off implementing it without streams:
public JsonObject growPath(JsonObject obj) {
    for (String path : this.paths) {
        obj = (JsonObject) obj.compute(path,
                (key, child) -> child instanceof JsonObject ? child : JsonObject.create());
    }
    return obj;
}
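If it helps to see the pattern in isolation, here is the same left fold written against a plain java.util.Map instead of the JsonObject type from the question (purely an illustration, not the actual JsonObject API):
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GrowPathDemo {

    // Descend path by path, creating a nested map wherever the current value
    // is missing or is not itself a map; return the deepest map reached.
    @SuppressWarnings("unchecked")
    static Map<String, Object> growPath(Map<String, Object> obj, List<String> paths) {
        Map<String, Object> current = obj;
        for (String path : paths) {
            Object child = current.get(path);
            if (!(child instanceof Map)) {
                child = new HashMap<String, Object>();
                current.put(path, child);
            }
            current = (Map<String, Object>) child;
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> root = new HashMap<>();
        growPath(root, Arrays.asList("a", "b", "c"));
        System.out.println(root); // prints {a={b={c={}}}}
    }
}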
In my code, I have to iterate through a bunch of objects of type T more than once. Since some objects may be quite large, I resorted to using a Supplier of Stream<T> instead of collecting them all in a list or set. The method is as follows:
private static Supplier<Stream<T>> streamSupplier(...) {
Iterator<T> iterator = ...;
Iterable<T> iterable = () -> iterator;
return () -> StreamSupport.stream(iterable.spliterator(), false);
}
and elsewhere in the code
Supplier<Stream<T>> supplier = streamSupplier(...);
List<T> ts = supplier.get().collect(Collectors.toList());
return ts.isEmpty(); // <-- true
The problem is that when I call the Supplier#get() method on the supplier returned by the above method, it is always empty. But when I changed my code to return a list, everything is working fine:
private static List<T> listSupplier(...) {
Iterator<T> iterator = ...;
Iterable<T> iterable = () -> iterator;
List<T> ts = Lists.newArrayList(iterable);
return ts; // <-- is populated correctly, NOT empty
}
I thought using a Supplier is the correct way to go if I want to use a stream repeatedly (so that I don't end up with a closed Stream). What am I doing wrong?
You probably want to do something like this:
private static Supplier<Stream<T>> streamSupplier(...) {
return () -> {
Iterator<T> iterator = ...;
return StreamSupport.stream(Spliterators.spliteratorUnknownSize(iterator, 0), false);
};
}
This assumes that the line
Iterator<T> iterator = ...;
creates a fresh iterator each time, independently of any existing iterator.
Also note that you should adjust the way the Spliterator is created, for example, if the size is known, or if there are characteristics such as ordering that are important.
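For example, if the element count is known up front, a sized spliterator can be created instead (knownSize here is a placeholder for however you obtain that count):
// Spliterators.spliterator(iterator, size, characteristics) reports SIZED and
// SUBSIZED automatically; ORDERED is added here assuming encounter order matters.
Spliterator<T> spliterator =
        Spliterators.spliterator(iterator, knownSize, Spliterator.ORDERED);
return StreamSupport.stream(spliterator, false);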
Finally, be very careful with doing
Iterable<T> iterable = () -> iterator;
This is close to being an anti-pattern. While it works in the type system -- calling the resulting Iterable's iterator() method will return an instance of Iterator -- it often won't work. The reason is that most code that uses Iterable instances assumes that it can call iterator() multiple times and get independent iterators. This doesn't do that; it captures the Iterator and returns the same Iterator instance each time. This will cause weird breakage similar to what you're seeing.
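Here is a tiny illustration of that breakage (names are arbitrary):
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class CapturedIteratorDemo {
    public static void main(String[] args) {
        List<String> data = Arrays.asList("a", "b", "c");
        Iterator<String> iterator = data.iterator();
        Iterable<String> iterable = () -> iterator;  // captures ONE iterator

        iterable.forEach(System.out::println);  // prints a, b, c
        iterable.forEach(System.out::println);  // prints nothing: same, now-exhausted iterator
    }
}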
It looks like you are trying to create many streams from the same iterator.
Try this:
Iterable<Document> docIterable = () -> ...;
where the ... is the expression you previously used in Iterator<Document> docIterator = ...;, so that a fresh iterator is created each time iterator() is called.
Also, why are you returning a Supplier<Stream<Document>> instead of just Stream<Document>?
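For example, if a fresh iterator can be obtained on every call, either of these shapes avoids the problem (createDocIterator is a placeholder for however you build the iterator; Document comes from the question's context):
// Option 1: hand out a fresh Stream on every call.
private static Stream<Document> documentStream() {
    Iterator<Document> docIterator = createDocIterator();  // builds a NEW iterator each time
    return StreamSupport.stream(
            Spliterators.spliteratorUnknownSize(docIterator, 0), false);
}

// Option 2: a well-behaved Iterable whose iterator() creates a new iterator per call.
Iterable<Document> docIterable = () -> createDocIterator();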