I have a class which wraps a map. The map is read and written by the add() and isUpwardTrend() methods indicated below.
Do you see any thread safety issues with synchronizing the whole methods?
How would you change the following implementation (i.e. would you use a ConcurrentHashMap or something else?) to improve performance in a multi-threaded context?
private Map<String, List<Double>> priceTable = new HashMap<String, List<Double>>();
private AutoTrader autoTrader;

public PriceTable(AutoTrader autoTrader) {
    this.autoTrader = autoTrader;
}

public synchronized void add(Price price) {
    if (!priceTable.containsKey(price.getProductName())) {
        List<Double> prices = new ArrayList<Double>();
        Double pValue = price.getPrice();
        prices.add(pValue);
        priceTable.put(price.getProductName(), prices);
    } else {
        Double pValue = price.getPrice();
        priceTable.get(price.getProductName()).add(pValue);
    }
    if (isUpwardTrend(price, priceTable)) {
        notifyAutoTrader(price);
    }
}

private void notifyAutoTrader(Price price) {
    autoTrader.onUpwardTrendEvent(price);
}

private synchronized boolean isUpwardTrend(Price price, Map<String, List<Double>> pricesTable) {
    List<Double> prices = priceTable.get(price.getProductName());
    if (prices.size() >= 4) {
        if (calcAvg(prices) > prices.get(prices.size() - 4)) {
            return true;
        }
    }
    return false;
}
HashMap is not thread-safe. You should use a ConcurrentHashMap (Hashtable would also be thread-safe, but it is legacy and locks the whole table). Note that switching the map type alone does not make add() correct: it performs a check-then-act sequence, so the "create the list if absent" step still has to be atomic, e.g. via computeIfAbsent.
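A minimal sketch of that direction, assuming Price and AutoTrader as used in the question (calcAvg is not shown there, so a plain average is assumed), with a per-product lock instead of a global one:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PriceTable {

    private final Map<String, List<Double>> priceTable = new ConcurrentHashMap<>();
    private final AutoTrader autoTrader;

    public PriceTable(AutoTrader autoTrader) {
        this.autoTrader = autoTrader;
    }

    public void add(Price price) {
        // Atomically create the per-product list if it is absent
        List<Double> prices = priceTable.computeIfAbsent(
                price.getProductName(), k -> new ArrayList<>());
        // Lock only this product's list, so different products don't contend
        synchronized (prices) {
            prices.add(price.getPrice());
            if (isUpwardTrend(prices)) {
                autoTrader.onUpwardTrendEvent(price);
            }
        }
    }

    private boolean isUpwardTrend(List<Double> prices) {
        // Caller holds the list's lock, so size() and get() see a consistent state
        return prices.size() >= 4
                && calcAvg(prices) > prices.get(prices.size() - 4);
    }

    private double calcAvg(List<Double> prices) {
        // Assumption: average over the whole price history, as the name suggests
        return prices.stream().mapToDouble(Double::doubleValue).average().orElse(0);
    }
}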
I built a stream that does a windowed join; when deployed to production, all was fine in terms of memory and performance.
However, I needed deduplication, so I implemented a Transformer that does it with the help of a WindowStore.
After deploying it, we are getting the expected results, but memory keeps growing until the pod crashes with an OOM.
After doing research I implemented many tricks to reduce memory usage, but they didn't help; the code is below.
It's clear to me that using the WindowStore is causing this issue, but how can I limit it?
The Store:
var storeBuilder = Stores.windowStoreBuilder(
    Stores.persistentWindowStore(
        storeName,
        Duration.ofSeconds(6),  // retention period
        Duration.ofSeconds(5),  // window size
        false                   // retainDuplicates
    ),
    Serdes.String(),
    SerdeFactory.JsonSerde(valueDataClass)
);
The stream:
var leftStream = builder.stream("leftTopic").filter(...);
var rightStream = builder.stream("rightTopic").filter(...);

leftStream
    .join(
        rightStream,
        joiner,
        JoinWindows
            .of(Duration.ofSeconds(5))
            .grace(Duration.ofSeconds(1))
            .until(Duration.ofSeconds(6).toMillis())
    )
    .transformValues(
        () -> new DeduplicationTransformer<>(
            storeName,
            Duration.ofSeconds(6).toMillis(),  // the constructor takes a long in ms
            (key, value) -> value.id
        ),
        storeName
    )
    .filter((k, v) -> v != null)
    .to("targetTopic");
Deduplication Transformer:
public class DeduplicationTransformer<K, V extends StreamModel>
implements ValueTransformerWithKey<K, V, V> {
private ProcessorContext context;
private String storeName;
private WindowStore<K, V> eventIdStore;
private final long leftDurationMs;
private final KeyValueMapper<K, V, K> idExtractor;
public DeduplicationTransformer(
String storeName,
long maintainDurationPerEventInMs,
final KeyValueMapper<K, V, K> idExtractor
) {
if (maintainDurationPerEventInMs < 2) {
throw new IllegalArgumentException(
"maintain duration per event must be > 1"
);
}
leftDurationMs = maintainDurationPerEventInMs;
this.idExtractor = idExtractor;
this.storeName = storeName;
}
@Override
public void init(final ProcessorContext context) {
this.context = context;
eventIdStore = (WindowStore<K, V>) context.getStateStore(storeName);
Duration interval = Duration.ofMillis(leftDurationMs);
this.context.schedule(
interval,
PunctuationType.WALL_CLOCK_TIME,
timestamp -> {
Instant from = Instant.ofEpochMilli(
System.currentTimeMillis() - leftDurationMs * 2
);
Instant to = Instant.ofEpochMilli(
System.currentTimeMillis() - leftDurationMs
);
KeyValueIterator<Windowed<K>, V> iterator = eventIdStore.fetchAll(
from,
to
);
while (iterator.hasNext()) {
KeyValue<Windowed<K>, V> entry = iterator.next();
eventIdStore.put(entry.key.key(), null, entry.key.window().start());
}
iterator.close();
context.commit();
}
);
}
@Override
public V transform(final K key, final V value) {
try {
final K eventId = idExtractor.apply(key, value);
if (eventId == null) {
return value;
} else {
final V output;
if (isDuplicate(eventId)) {
output = null;
} else {
output = value;
rememberNewEvent(eventId, value, context.timestamp());
}
return output;
}
} catch (Exception e) {
return null;
}
}
private boolean isDuplicate(final K eventId) {
final long eventTime = context.timestamp();
final WindowStoreIterator<V> timeIterator = eventIdStore.fetch(
eventId,
eventTime - leftDurationMs,
eventTime
);
final boolean isDuplicate = timeIterator.hasNext();
timeIterator.close();
return isDuplicate;
}
private void rememberNewEvent(final K eventId, V v, final long timestamp) {
eventIdStore.put(eventId, v, timestamp);
}
@Override
public void close() {}
}
RocksDB config:
public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
private Cache cache = new LRUCache(5 * 1024 * 1024L); // 5 MiB
private Filter filter = new BloomFilter();
private WriteBufferManager writeBufferManager = new WriteBufferManager(
    4 * 1024 * 1024L, // 4 MiB
    cache
);
@Override
public void setConfig(
final String storeName,
final Options options,
final Map<String, Object> configs
) {
BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
tableConfig.setBlockCache(cache);
tableConfig.setCacheIndexAndFilterBlocks(true);
options.setWriteBufferManager(writeBufferManager);
tableConfig.setCacheIndexAndFilterBlocksWithHighPriority(false);
tableConfig.setPinTopLevelIndexAndFilter(true);
tableConfig.setBlockSize(4 * 1024L);
options.setMaxWriteBufferNumber(1);
options.setWriteBufferSize(1024 * 1024L);
options.setTableFormatConfig(tableConfig);
}
@Override
public void close(final String storeName, final Options options) {
cache.close();
filter.close();
}
}
Config:
props.put(
StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG,
0
);
props.put(
StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
BoundedMemoryRocksDBConfig.class
);
Things I've tried so far:
Using a bounded RocksDB config setter
Using jemalloc instead of malloc
Reducing the retention period to 5 seconds
Reducing the number of partitions of the topics (this only slowed the rate of the memory leak)
Using in-memory stores instead of persistent ones; memory was very stable, but app startup then takes around 10 minutes on each deployment (see the sketch after this list).
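For reference, a minimal sketch of that in-memory variant, reusing the question's storeName, valueDataClass and SerdeFactory and assuming the same retention and window size:

var storeBuilder = Stores.windowStoreBuilder(
    Stores.inMemoryWindowStore(
        storeName,
        Duration.ofSeconds(6),  // retention period
        Duration.ofSeconds(5),  // window size
        false                   // retainDuplicates
    ),
    Serdes.String(),
    SerdeFactory.JsonSerde(valueDataClass)
);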
I am trying to partition a list into multiple sublists based on the condition that the sum of a particular field should be less than 'x'. Below is sample code:
public class TestGrouping {

    public static class Transaction {
        String txnId;
        String comment;
        Amount amount;

        public Transaction(String txnId, String comment, Amount amount) {
            this.txnId = txnId;
            this.comment = comment;
            this.amount = amount;
        }
    }

    public static class Amount {
        String amountValue;

        public Amount(String amountValue) {
            this.amountValue = amountValue;
        }
    }

    public static void main(String[] args) {
        List<Transaction> transactionList = new ArrayList<>();
        Transaction txn1 = new Transaction("T1", "comment1", new Amount("81"));
        Transaction txn2 = new Transaction("T2", "comment2", new Amount("5"));
        Transaction txn3 = new Transaction("T3", "comment3", new Amount("12"));
        Transaction txn4 = new Transaction("T4", "comment4", new Amount("28"));
        transactionList.add(txn1);
        transactionList.add(txn2);
        transactionList.add(txn3);
        transactionList.add(txn4);
        // Below is what I thought might work:
        // transactionList.stream().collect(groupingBy(r -> Collectors.summingInt(Integer.valueOf(r.amount.amountValue)), Collectors.mapping(t -> t, toList())));
    }
}
The goal is to split the transactionList into 2 (or more) sublists, where the sum of 'amount' in each is less than 100. So I could have one sublist with only txn1 (amount 81), and another sublist with txn2, txn3, txn4 (as their sum is less than 100). Another possibility is one sublist with txn1, txn2, txn3, and another with just txn4. I'm not trying to create the most 'optimal' lists; the sum of amounts in each sublist just has to be less than 100.
Any clues?
The idea is to use a custom collector to generate a list of pairs (amountSum, transactions); the stream should initially be sorted. The accumulator method (here Accumulator.accept) does the grouping logic; I didn't implement combine because there is no need for a combiner in a non-parallel stream.
Below is the code snippet, hope it helps.
public class TestStream {
public class Transaction {
String txnId;
String comment;
Amount amount;
public Transaction(String txnId, String comment, Amount amount) {
this.txnId = txnId;
this.comment = comment;
this.amount = amount;
}
}
public class Amount {
String amountValue;
public Amount(String amountValue) {
this.amountValue = amountValue;
}
}
@Test
public void test() {
List<Transaction> transactionList = new ArrayList<>();
Transaction txn1 = new Transaction("T1", "comment1", new Amount("81"));
Transaction txn2 = new Transaction("T2", "comment2", new Amount("5"));
Transaction txn3 = new Transaction("T3", "comment3", new Amount("12"));
Transaction txn4 = new Transaction("T4", "comment4", new Amount("28"));
transactionList.add(txn1);
transactionList.add(txn2);
transactionList.add(txn3);
transactionList.add(txn4);
transactionList.stream()
.sorted(Comparator.comparing(tr -> Integer.valueOf(tr.amount.amountValue)))
.collect(ArrayList<Pair<Integer, List<Transaction>>>::new, Accumulator::accept, (x, y) -> {
})
.forEach(t -> {
System.out.println(t.left);
});
}
static class Accumulator {
public static void accept(List<Pair<Integer, List<Transaction>>> lPair, Transaction tr) {
Pair<Integer, List<Transaction>> lastPair = lPair.isEmpty() ? null : lPair.get(lPair.size() - 1);
Integer amount = Integer.valueOf(tr.amount.amountValue);
if (Objects.isNull(lastPair) || lastPair.left + amount > 100) {
lPair.add(
new TestStream().new Pair<Integer, List<Transaction>>(amount,
Arrays.asList(tr)));
} else {
List<Transaction> newList = new ArrayList<>();
newList.addAll(lastPair.getRight());
newList.add(tr);
lastPair.setLeft(lastPair.getLeft() + amount);
lastPair.setRight(newList);
}
}
}
class Pair<T, V> {
private T left;
private V right;
public Pair(T left, V right) {
this.left = left;
this.right = right;
}
public V getRight() {
return right;
}
public T getLeft() {
return left;
}
public void setLeft(T left) {
this.left = left;
}
public void setRight(V right) {
this.right = right;
}
}
}
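For the sample data this test prints 45 and 81: after sorting, the amounts arrive as 5, 12, 28, 81; the accumulator packs 5 + 12 + 28 = 45 into the first pair, and 81 starts a second pair because 45 + 81 = 126 would exceed 100.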
I am using Query DSL and want my result set to return a Page object. Is there a way to do that in Query DSL? If so, what would my query look like?
I am using JPAQuery and I have my QClasses
The Method structure is this
public Page<Object> searchPerson(String name, String phone) {
    Page<Object> results = null;
    JPQLQuery query = new JPAQuery(entityManager);
    QPerson person = QPerson.person;
    // I am assuming my query would go here
    results = query.from(person). ?????
    return results;
}
Help!
Here is my implementation for Paging with QueryDSL. A PageRequest defines the parameters for our query (limit and page):
public class PageRequest {

    protected Long page = 1L; // 1 is the first page
    protected Integer limit = 10;

    public PageRequest(Long page, Integer limit) {
        this.limit = limit;
        this.page = page;
    }

    public Long getPage() {
        return page;
    }

    public Integer getLimit() {
        return limit;
    }

    public Long getOffset() {
        return (page - 1L) * limit;
    }
}
The Page class contains the result of the query (here, the attribute objects) and could implement methods to create nice paging links.
public class Page<T> extends PageRequest {
protected Collection<T> objects;
private Long totalCount;
private Long pageCount;
private Boolean hasPageLinkPrev;
private Boolean hasPageLinkNext;
private Collection<Long> pageLinks;
public Page(Long page, Integer limit, Long totalCount, Collection<T> objects) {
this.page = page;
this.limit = limit;
this.totalCount = totalCount;
this.objects = objects;
this.pageCount = totalCount / limit;
if (totalCount % limit > 0) {
this.pageCount = this.pageCount + 1;
}
this.hasPageLinkPrev = page > 1;
this.hasPageLinkNext = page < this.pageCount;
this.pageLinks = new ArrayList<>();
if (this.pageCount != 1) {
    this.pageLinks.add(1L);
    if (page > 3L) {
        this.pageLinks.add(-1L); // -1 marks a gap ("...") in the link list
    }
    if (page > 2L) {
        if (page.equals(this.pageCount) && this.pageCount > 3L) {
            this.pageLinks.add(page - 2L);
        }
        this.pageLinks.add(page - 1L);
    }
    if (page != 1L && !page.equals(this.pageCount)) {
        this.pageLinks.add(page);
    }
    if (page < this.pageCount - 1L) {
        this.pageLinks.add(page + 1L);
        if (page == 1L && this.pageCount > 3L) {
            this.pageLinks.add(page + 2L);
        }
    }
    if (page < this.pageCount - 2L) {
        this.pageLinks.add(-1L);
    }
    this.pageLinks.add(this.pageCount);
}
}
public Page(PageRequest pageRequest, Long totalCount, Collection<T> objects) {
this(pageRequest.getPage(), pageRequest.getLimit(), totalCount, objects);
}
public Long getTotalCount() {
return this.totalCount;
}
public Long getPageCount() {
return this.pageCount;
}
public Long getPage() {
return this.page;
}
public Integer getLimit() {
return this.limit;
}
public Boolean getHasPageLinkPrev() {
return this.hasPageLinkPrev;
}
public Boolean getHasPageLinkNext() {
return hasPageLinkNext;
}
public Collection<Long> getPageLinks() {
return pageLinks;
}
public Collection<T> getObjects() {
return objects;
}
}
With that stuff it is not very hard to create the query and put the results into our page object. One possibility is to write a generic method in the base class of the repository classes:
protected <T> Page<T> getPage(JPQLQuery<T> query, PageRequest pageRequest) {
List<T> resultList = query
.offset(pageRequest.getOffset())
.limit(pageRequest.getLimit())
.fetch();
Long totalCount = query.fetchCount();
return new Page<T>(pageRequest, totalCount, resultList);
}
In your repository class you create your query for the particular use case. Then you can use the method getPage to get the results in a Page.
public Page<Person> searchPerson(String name,
String phone,
PageRequest request){
Page<Person> results=null;
JPQLQuery<Person> query = new JPAQuery<>(entityManager);
QPerson person = QPerson.person;
query = query.from(person)
.where(person.name.eq(name)
.and(person.phone.eq(phone)));
return getPage(query, request);
}
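A usage sketch (the repository instance and the concrete search arguments are made up for illustration):

PageRequest request = new PageRequest(2L, 10); // second page, 10 results per page
Page<Person> page = personRepository.searchPerson("Alice", "555-0100", request);
System.out.println("total: " + page.getTotalCount()
    + ", pages: " + page.getPageCount()
    + ", links: " + page.getPageLinks());
page.getObjects().forEach(System.out::println);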
The solution for the above was to use a BooleanBuilder in the method above, and to change the method to return a Page of Person objects.
Please check BooleanBuilder
QPerson person = QPerson.person;
BooleanBuilder builder = this.getBuilder(name, phone, page, pageSize, sortFlag, sortItem);
PageRequest pg = getPRequest(page, pageSize);
Page<Person> pages = personRepo.findAll(builder, pg);
return pages;
and then implemented the getBuilder method for it, which is the one below:
public BooleanBuilder getBuilder(String name, String phone, Integer page, Integer pageSize, String sortFlag, String sortItem) {
QPerson person = QPerson.person;
BooleanBuilder builder = new BooleanBuilder();
builder.and(person.name.startsWith(name));
return builder;
}
and finally implemented the getPRequest method as follows:
public PageRequest getPRequest(Integer page, Integer pageSize) {
return new PageRequest(page, pageSize);
}
Oooooh Happy Days!
I have the numbers from 1 to 10,000 stored in an array of longs. Adding them sequentially gives a result of 50,005,000.
I have written a Spliterator where, if the size of the array is greater than 1000, it will be split into another array.
Here is my code. But when I run it, the result of the addition is far greater than 50,005,000. Can someone tell me what is wrong with my code?
Thank you so much.
import java.util.Arrays;
import java.util.Optional;
import java.util.Spliterator;
import java.util.function.Consumer;
import java.util.stream.LongStream;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;
public class SumSpliterator implements Spliterator<Long> {
private final long[] numbers;
private int currentPosition = 0;
public SumSpliterator(long[] numbers) {
super();
this.numbers = numbers;
}
@Override
public boolean tryAdvance(Consumer<? super Long> action) {
action.accept(numbers[currentPosition++]);
return currentPosition < numbers.length;
}
@Override
public long estimateSize() {
return numbers.length - currentPosition;
}
@Override
public int characteristics() {
return SUBSIZED;
}
@Override
public Spliterator<Long> trySplit() {
int currentSize = numbers.length - currentPosition;
if( currentSize <= 1_000){
return null;
}else{
currentPosition = currentPosition + 1_000;
return new SumSpliterator(Arrays.copyOfRange(numbers, 1_000, numbers.length));
}
}
public static void main(String[] args) {
long[] twoThousandNumbers = LongStream.rangeClosed(1, 10_000).toArray();
Spliterator<Long> spliterator = new SumSpliterator(twoThousandNumbers);
Stream<Long> stream = StreamSupport.stream(spliterator, false);
System.out.println( sumValues(stream) );
}
private static long sumValues(Stream<Long> stream){
Optional<Long> optional = stream.reduce( ( t, u) -> t + u );
return optional.get() != null ? optional.get() : Long.valueOf(0);
}
}
I have the strong feeling that you didn’t get the purpose of splitting right. It’s not meant to copy the underlying data but just provide access to a range of it. Keep in mind that spliterators provide read-only access. So you should pass the original array to the new spliterator and configure it with an appropriate position and length instead of copying the array.
But besides the inefficiency of copying, the logic is obviously wrong: you pass Arrays.copyOfRange(numbers, 1_000, numbers.length) to the new spliterator, so the new spliterator covers the elements from position 1000 to the end of the array, and you advance the current spliterator's position by 1000, so the old spliterator covers the elements from currentPosition + 1_000 to the end of the array. So both spliterators will cover elements at the end of the array while, at the same time, depending on the previous value of currentPosition, elements at the beginning might not be covered at all. The range you actually want to split off when advancing currentPosition by 1_000 is Arrays.copyOfRange(numbers, currentPosition, currentPosition + 1_000), referring to the currentPosition before advancing.
It should also be noted that a spliterator should attempt to split balanced, that is, in the middle if the size is known. So splitting off a thousand elements is not the right strategy for an array.
Further, your tryAdvance method is wrong. It should not test after calling the consumer but before, returning false if there are no more elements, which also implies that the consumer has not been called.
Putting it all together, the implementation may look like
public class MyArraySpliterator implements Spliterator<Long> {
private final long[] numbers;
private int currentPosition, endPosition;
public MyArraySpliterator(long[] numbers) {
this(numbers, 0, numbers.length);
}
public MyArraySpliterator(long[] numbers, int start, int end) {
this.numbers = numbers;
currentPosition=start;
endPosition=end;
}
@Override
public boolean tryAdvance(Consumer<? super Long> action) {
if(currentPosition < endPosition) {
action.accept(numbers[currentPosition++]);
return true;
}
return false;
}
@Override
public long estimateSize() {
return endPosition - currentPosition;
}
@Override
public int characteristics() {
return ORDERED|NONNULL|SIZED|SUBSIZED;
}
@Override
public Spliterator<Long> trySplit() {
if(estimateSize()<=1000) return null;
int middle = (endPosition + currentPosition)>>>1;
MyArraySpliterator prefix
= new MyArraySpliterator(numbers, currentPosition, middle);
currentPosition=middle;
return prefix;
}
}
But of course, it’s recommended to provide a specialized forEachRemaining implementation, where possible:
@Override
public void forEachRemaining(Consumer<? super Long> action) {
int pos=currentPosition, end=endPosition;
currentPosition=end;
for(;pos<end; pos++) action.accept(numbers[pos]);
}
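To see the corrected spliterator in action, the question's main method can be reused almost unchanged; assuming MyArraySpliterator as defined above:

public static void main(String[] args) {
    long[] numbers = LongStream.rangeClosed(1, 10_000).toArray();
    Spliterator<Long> spliterator = new MyArraySpliterator(numbers);
    // Parallel or sequential, this now prints 50005000
    System.out.println(StreamSupport.stream(spliterator, true)
        .mapToLong(Long::longValue)
        .sum());
}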
As a final note, for the task of summing longs from an array, a Spliterator.OfLong and a LongStream are preferred, and that work has already been done; see Arrays.spliterator() and LongStream.sum(), which make the whole task as simple as Arrays.stream(numbers).sum().
In this case, just the odd lines have meaningful data and there is no character that uniquely identifies those lines. My intention is to get something equivalent to the following example:
Stream<DomainObject> res = Files.lines(src)
    .filter(line -> isOddLine())
    .map(line -> toDomainObject(line));
Is there any “clean” way to do it, without sharing global state?
No, there's no way to do this conveniently with the API. (Basically the same reason as to why there is no easy way of having a zipWithIndex, see Is there a concise way to iterate over a stream with indices in Java 8?).
You can still use Stream, but go for an iterator:
Iterator<String> iter = Files.lines(src).iterator();
while (iter.hasNext()) {
    iter.next(); // discard
    if (iter.hasNext()) {
        toDomainObject(iter.next()); // use
    }
}
(You might want to use try-with-resource on that stream though.)
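For example, a sketch of the same loop with the stream inside try-with-resources (toDomainObject as in the question):

try (Stream<String> lines = Files.lines(src)) {
    Iterator<String> iter = lines.iterator();
    while (iter.hasNext()) {
        iter.next(); // discard
        if (iter.hasNext()) {
            toDomainObject(iter.next()); // use
        }
    }
}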
A clean way is to go one level deeper and implement a Spliterator. On this level you can control the iteration over the stream elements and simply iterate over two items whenever the downstream requests one item:
public class OddLines<T> extends Spliterators.AbstractSpliterator<T>
implements Consumer<T> {
public static <T> Stream<T> oddLines(Stream<T> source) {
return StreamSupport.stream(new OddLines<>(source.spliterator()), false);
}
private static long odd(long l) { return l==Long.MAX_VALUE? l: (l+1)/2; }
Spliterator<T> originalLines;
OddLines(Spliterator<T> source) {
super(odd(source.estimateSize()), source.characteristics());
originalLines=source;
}
@Override
public boolean tryAdvance(Consumer<? super T> action) {
if(originalLines==null || !originalLines.tryAdvance(action))
return false;
if(!originalLines.tryAdvance(this)) originalLines=null;
return true;
}
@Override
public void accept(T t) {}
}
Then you can use it like
Stream<DomainObject> res = OddLines.oddLines(Files.lines(src))
    .map(line -> toDomainObject(line));
This solution has no side effects and retains most advantages of the Stream API, like lazy evaluation. However, it should be clear that it doesn't have useful semantics for unordered stream processing (beware of subtle aspects like using forEachOrdered rather than forEach when performing a terminal action on all elements), and while it supports parallel processing in principle, it's unlikely to be very efficient…
As aioobe said, there isn't a convenient way to do this, but there are several inconvenient ways. :-)
Here's another spliterator-based approach. Unlike Holger's, which wraps another spliterator, this one does the I/O itself. This gives greater control over things like ordering, but it also means that it has to deal with IOException and close handling. I also threw in a Predicate parameter that lets you get a crack at which lines get passed through.
static class LineSpliterator extends Spliterators.AbstractSpliterator<String>
implements AutoCloseable {
final BufferedReader br;
final LongPredicate pred;
long count = 0L;
public LineSpliterator(Path path, LongPredicate pred) throws IOException {
super(Long.MAX_VALUE, Spliterator.ORDERED);
br = Files.newBufferedReader(path);
this.pred = pred;
}
@Override
public boolean tryAdvance(Consumer<? super String> action) {
try {
String s;
while ((s = br.readLine()) != null) {
if (pred.test(++count)) {
action.accept(s);
return true;
}
}
return false;
} catch (IOException ioe) {
throw new UncheckedIOException(ioe);
}
}
@Override
public void close() {
try {
br.close();
} catch (IOException ioe) {
throw new UncheckedIOException(ioe);
}
}
public static Stream<String> lines(Path path, LongPredicate pred) throws IOException {
LineSpliterator ls = new LineSpliterator(path, pred);
return StreamSupport.stream(ls, false)
.onClose(() -> ls.close());
}
}
You'd use it within a try-with-resources to ensure that the file is closed, even if an exception occurs:
static void printOddLines() throws IOException {
try (Stream<String> lines = LineSpliterator.lines(PATH, x -> (x & 1L) == 1L)) {
lines.forEach(System.out::println);
}
}
You can do this with a custom spliterator:
public class EvenOdd {
public static final class EvenSpliterator<T> implements Spliterator<T> {
private final Spliterator<T> underlying;
boolean even;
public EvenSpliterator(Spliterator<T> underlying, boolean even) {
this.underlying = underlying;
this.even = even;
}
@Override
public boolean tryAdvance(Consumer<? super T> action) {
if (even) {
even = false;
return underlying.tryAdvance(action);
}
if (!underlying.tryAdvance(t -> {})) {
return false;
}
return underlying.tryAdvance(action);
}
@Override
public Spliterator<T> trySplit() {
if (!hasCharacteristics(SUBSIZED)) {
return null;
}
final Spliterator<T> newUnderlying = underlying.trySplit();
if (newUnderlying == null) {
return null;
}
final boolean oldEven = even;
if ((newUnderlying.estimateSize() & 1) == 1) {
even = !even;
}
return new EvenSpliterator<>(newUnderlying, oldEven);
}
@Override
public long estimateSize() {
return underlying.estimateSize()>>1;
}
@Override
public int characteristics() {
return underlying.characteristics();
}
}
public static void main(String[] args) {
final EvenSpliterator<Integer> spliterator = new EvenSpliterator<>(
    IntStream.range(1, 100000).parallel().mapToObj(Integer::valueOf).spliterator(),
    false);
final List<Integer> result = StreamSupport.stream(spliterator, true)
    .parallel()
    .collect(Collectors.toList());
final List<Integer> expected = IntStream.range(1, 100000 / 2)
    .mapToObj(i -> i * 2)
    .collect(Collectors.toList());
if (result.equals(expected)) {
System.out.println("Yay! Expected result.");
}
}
}
Following the @aioobe algorithm, here's another spliterator-based approach, as proposed by @Holger, but more concise, even if less efficient.
public static <T> Stream<T> filterOdd(Stream<T> src) {
Spliterator<T> iter = src.spliterator();
AbstractSpliterator<T> res = new AbstractSpliterator<T>(Long.MAX_VALUE, Spliterator.ORDERED)
{
@Override
public boolean tryAdvance(Consumer<? super T> action) {
iter.tryAdvance(item -> {}); // discard
return iter.tryAdvance(action); // use
}
};
return StreamSupport.stream(res, false);
}
Then you can use it like
Stream<DomainObject> res = filterOdd(Files.lines(src))
    .map(line -> toDomainObject(line));