Memory grows until OOM crash when using a WindowStore in Kafka Streams - apache-kafka-streams

I built a stream that does a windowed join. When deployed to production, all was fine in terms of memory and performance.
However, I needed deduplication, so I implemented a Transformer that does it with the help of a WindowStore.
After deploying it, we get the expected results, but memory keeps growing until the pod crashes with OOM.
After doing research I applied many tricks to reduce memory usage, but they didn't help; the code is below.
It's clear to me that using the WindowStore is causing this issue, but how can I limit it?
The Store:
var storeBuilder = Stores.windowStoreBuilder(
Stores.persistentWindowStore(
storeName,
Duration.ofSeconds(6),
Duration.ofSeconds(5),
false
),
Serdes.String(),
SerdeFactory.JsonSerde(valueDataClass)
);
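This isn't shown in the snippet, but presumably the store is registered with the topology before transformValues references it by name; a minimal sketch, assuming builder is the StreamsBuilder used for the streams below:
// Hypothetical registration step: makes the store available to transformValues() by name.
builder.addStateStore(storeBuilder);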
The stream:
var leftStream = builder.stream("leftTopic").filter(...);
var rightStream = builder.stream("rightTopic").filter(...);
leftStream.join(
rightStream,
joiner,
JoinWindows.of(Duration.ofSeconds(5))
.grace(Duration.ofSeconds(1))
.until(Duration.ofSeconds(6))
)
.transformValues(
() ->
new DeduplicationTransformer<>(
storeName,
Duration.ofSeconds(6).toMillis(),
(key, value) -> value.id
),
storeName
)
.filter((k, v) -> v != null)
.to("targetTopic");
Deduplication Transformer:
public class DeduplicationTransformer<K, V extends StreamModel>
implements ValueTransformerWithKey<K, V, V> {
private ProcessorContext context;
private String storeName;
private WindowStore<K, V> eventIdStore;
private final long leftDurationMs;
private final KeyValueMapper<K, V, K> idExtractor;
public DeduplicationTransformer(
String storeName,
long maintainDurationPerEventInMs,
final KeyValueMapper<K, V, K> idExtractor
) {
if (maintainDurationPerEventInMs < 2) {
throw new IllegalArgumentException(
"maintain duration per event must be > 1"
);
}
leftDurationMs = maintainDurationPerEventInMs;
this.idExtractor = idExtractor;
this.storeName = storeName;
}
@Override
public void init(final ProcessorContext context) {
this.context = context;
eventIdStore = (WindowStore<K, V>) context.getStateStore(storeName);
Duration interval = Duration.ofMillis(leftDurationMs);
this.context.schedule(
interval,
PunctuationType.WALL_CLOCK_TIME,
timestamp -> {
Instant from = Instant.ofEpochMilli(
System.currentTimeMillis() - leftDurationMs * 2
);
Instant to = Instant.ofEpochMilli(
System.currentTimeMillis() - leftDurationMs
);
KeyValueIterator<Windowed<K>, V> iterator = eventIdStore.fetchAll(
from,
to
);
while (iterator.hasNext()) {
KeyValue<Windowed<K>, V> entry = iterator.next();
eventIdStore.put(entry.key.key(), null, entry.key.window().start());
}
iterator.close();
context.commit();
}
);
}
@Override
public V transform(final K key, final V value) {
try {
final K eventId = idExtractor.apply(key, value);
if (eventId == null) {
return value;
} else {
final V output;
if (isDuplicate(eventId)) {
output = null;
} else {
output = value;
rememberNewEvent(eventId, value, context.timestamp());
}
return output;
}
} catch (Exception e) {
return null;
}
}
private boolean isDuplicate(final K eventId) {
final long eventTime = context.timestamp();
final WindowStoreIterator<V> timeIterator = eventIdStore.fetch(
eventId,
eventTime - leftDurationMs,
eventTime
);
final boolean isDuplicate = timeIterator.hasNext();
timeIterator.close();
return isDuplicate;
}
private void rememberNewEvent(final K eventId, V v, final long timestamp) {
eventIdStore.put(eventId, v, timestamp);
}
@Override
public void close() {}
}
RocksDB config:
public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
private Cache cache = new LRUCache(5 * 1024 * 1024L);
private Filter filter = new BloomFilter();
private WriteBufferManager writeBufferManager = new WriteBufferManager(
4 * 1024 * 1024L,
cache
);
@Override
public void setConfig(
final String storeName,
final Options options,
final Map<String, Object> configs
) {
BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
tableConfig.setBlockCache(cache);
tableConfig.setCacheIndexAndFilterBlocks(true);
options.setWriteBufferManager(writeBufferManager);
tableConfig.setCacheIndexAndFilterBlocksWithHighPriority(false);
tableConfig.setPinTopLevelIndexAndFilter(true);
tableConfig.setBlockSize(4 * 1024L);
options.setMaxWriteBufferNumber(1);
options.setWriteBufferSize(1024 * 1024L);
options.setTableFormatConfig(tableConfig);
}
@Override
public void close(final String storeName, final Options options) {
cache.close();
filter.close();
}
}
Config:
props.put(
StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG,
0
);
props.put(
StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
BoundedMemoryRocksDBConfig.class
);
Things I've tried so far:
Using a bounded RocksDB config setter
Using jemalloc instead of malloc
Reducing the retention period to 5 seconds
Reducing the number of partitions of the topics (this only slowed the rate of the memory leak)
Used in-memory stores instead of persistent ones; memory was very stable, but app startup then takes around 10 minutes on each deployment (the in-memory variant is sketched below).
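For reference, the in-memory variant mentioned in the last point is a one-line change in the store builder; a minimal sketch, assuming the same store name and serdes as above:
var storeBuilder = Stores.windowStoreBuilder(
    Stores.inMemoryWindowStore(
        storeName,
        Duration.ofSeconds(6),   // retention
        Duration.ofSeconds(5),   // window size
        false                    // retain duplicates
    ),
    Serdes.String(),
    SerdeFactory.JsonSerde(valueDataClass)
);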

Related

Java8 Stream Collectors - Splitting a list based on sum of values

I am trying to partition a list into multiple sublists based on the condition that the sum of a particular field should be less than 'x'. Below is sample code:
public class TestGrouping {
public static class Transaction{
String txnId;
String comment;
Amount amount;
public Transaction(String txnId, String comment, Amount amount) {
this.txnId = txnId;
this.comment = comment;
this.amount = amount;
}
}
public static class Amount{
String amountValue;
public Amount(String amountValue) {
this.amountValue = amountValue;
}
}
public static void main(String[] args) {
List<Transaction> transactionList = new ArrayList<>();
Transaction txn1 = new Transaction("T1","comment1",new Amount("81"));
Transaction txn2 = new Transaction("T2","comment2",new Amount("5"));
Transaction txn3 = new Transaction("T3","comment3",new Amount("12"));
Transaction txn4 = new Transaction("T4","comment4",new Amount("28"));
transactionList.add(txn1);
transactionList.add(txn2);
transactionList.add(txn3);
transactionList.add(txn4);
//below is what i thought might work
// transactionList.stream().collect(groupingBy (r->Collectors.summingInt(Integer.valueOf(r.amount.amountValue)),Collectors.mapping(t -> t, toList())));
}
}
The goal is to split the transactionList into 2 (or more) sublists, where the sum of 'amount' is less than 100. So I could have one sublist with only txn1 (amount 81), and the other sublist with txn2, txn3, txn4 (as the sum of these is less than 100). Another possibility is to have sublist1 with txn1, txn2, txn3, and another sublist with just txn4. I'm not trying to create the most 'optimal' lists; basically, the sum of amounts should just be less than 100.
Any clues?
The idea is to use a custom collector to generate a list of pairs (amountSum, transactions); the stream should initially be sorted. The accumulator method (here Accumulator.accept) does the grouping logic; I didn't implement combine because there is no need for a combiner in a non-parallel stream.
Below is the code snippet, hope it helps.
public class TestStream {
public class Transaction {
String txnId;
String comment;
Amount amount;
public Transaction(String txnId, String comment, Amount amount) {
this.txnId = txnId;
this.comment = comment;
this.amount = amount;
}
}
public class Amount {
String amountValue;
public Amount(String amountValue) {
this.amountValue = amountValue;
}
}
@Test
public void test() {
List<Transaction> transactionList = new ArrayList<>();
Transaction txn1 = new Transaction("T1", "comment1", new Amount("81"));
Transaction txn2 = new Transaction("T2", "comment2", new Amount("5"));
Transaction txn3 = new Transaction("T3", "comment3", new Amount("12"));
Transaction txn4 = new Transaction("T4", "comment4", new Amount("28"));
transactionList.add(txn1);
transactionList.add(txn2);
transactionList.add(txn3);
transactionList.add(txn4);
transactionList.stream()
.sorted(Comparator.comparing(tr -> Integer.valueOf(tr.amount.amountValue)))
.collect(ArrayList<Pair<Integer, List<Transaction>>>::new, Accumulator::accept, (x, y) -> {
})
.forEach(t -> {
System.out.println(t.left);
});
}
static class Accumulator {
public static void accept(List<Pair<Integer, List<Transaction>>> lPair, Transaction tr) {
Pair<Integer, List<Transaction>> lastPair = lPair.isEmpty() ? null : lPair.get(lPair.size() - 1);
Integer amount = Integer.valueOf(tr.amount.amountValue);
if (Objects.isNull(lastPair) || lastPair.left + amount > 100) {
lPair.add(
new TestStream().new Pair<Integer, List<Transaction>>(amount,
Arrays.asList(tr)));
} else {
List<Transaction> newList = new ArrayList<>();
newList.addAll(lastPair.getRight());
newList.add(tr);
lastPair.setLeft(lastPair.getLeft() + amount);
lastPair.setRight(newList);
}
}
}
class Pair<T, V> {
private T left;
private V right;
/**
*
*/
public Pair(T left, V right) {
this.left = left;
this.right = right;
}
public V getRight() {
return right;
}
public T getLeft() {
return left;
}
public void setLeft(T left) {
this.left = left;
}
public void setRight(V right) {
this.right = right;
}
}
}

Sorting DataStream using Apache Flink

I am learning Flink and I started with a simple word count using DataStream. To enhance the processing I filtered the output to show only the results with 3 or more words found.
DataStream<Tuple2<String, Integer>> dataStream = env
.socketTextStream("localhost", 9000)
.flatMap(new Splitter())
.keyBy(0)
.timeWindow(Time.seconds(5))
.apply(new MyWindowFunction())
.sum(1)
.filter(word -> word.f1 >= 3);
I would like to create a WindowFunction to sort the output by the value of words found. The WindowFunction that I am trying to implement does not compile at all. I am struggling to define the apply method and the parameters of the WindowFunction interface.
public static class MyWindowFunction implements WindowFunction<
Tuple2<String, Integer>, // input type
Tuple2<String, Integer>, // output type
Tuple2<String, Integer>, // key type
TimeWindow> {
void apply(Tuple2<String, Integer> key, TimeWindow window, Iterable<Tuple2<String, Integer>> input, Collector<Tuple2<String, Integer>> out) {
String word = ((Tuple2<String, Integer>)key).f0;
Integer count = ((Tuple2<String, Integer>)key).f1;
.........
out.collect(new Tuple2<>(word, count));
}
}
I am updating this answer to use Flink 1.12.0. In order to sort the elements of a stream, I had to use a KeyedProcessFunction after counting the stream with a ReduceFunction. Then I had to set the parallelism of the very last transformation to 1 in order not to change the order of the elements that I sorted using the KeyedProcessFunction. The sequence that I am using is socketTextStream -> flatMap -> keyBy -> reduce -> keyBy -> process -> print().setParallelism(1). Below is the example:
public class SocketWindowWordCountJava {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.socketTextStream("localhost", 9000)
.flatMap(new SplitterFlatMap())
.keyBy(new WordKeySelector())
.reduce(new SumReducer())
.keyBy(new WordKeySelector())
.process(new SortKeyedProcessFunction(3 * 1000))
.print().setParallelism(1);
String executionPlan = env.getExecutionPlan();
System.out.println("ExecutionPlan ........................ ");
System.out.println(executionPlan);
System.out.println("........................ ");
env.execute("Window WordCount sorted");
}
}
The UDF that I used to sort the stream is the SortKeyedProcessFunction, which extends KeyedProcessFunction. I use a ValueState<List<Event>> listState, where Event implements Comparable<Event>, to keep the list to be sorted as state. In the processElement method I register a timer for the time at which I added the event to the state (context.timerService().registerProcessingTimeTimer(timeoutTime);) and I collect the events in the onTimer method. I am also using a time window of 3 seconds here.
public class SortKeyedProcessFunction extends KeyedProcessFunction<String, Tuple2<String, Integer>, Event> {
private static final long serialVersionUID = 7289761960983988878L;
// delay after which an alert flag is thrown
private final long timeOut;
// state to remember the last timer set
private ValueState<List<Event>> listState = null;
private ValueState<Long> lastTime = null;
public SortKeyedProcessFunction(long timeOut) {
this.timeOut = timeOut;
}
@Override
public void open(Configuration conf) {
// setup timer and HLL state
ValueStateDescriptor<List<Event>> descriptor = new ValueStateDescriptor<>(
// state name
"sorted-events",
// type information of state
TypeInformation.of(new TypeHint<List<Event>>() {
}));
listState = getRuntimeContext().getState(descriptor);
ValueStateDescriptor<Long> descriptorLastTime = new ValueStateDescriptor<Long>(
"lastTime",
TypeInformation.of(new TypeHint<Long>() {
}));
lastTime = getRuntimeContext().getState(descriptorLastTime);
}
@Override
public void processElement(Tuple2<String, Integer> value, Context context, Collector<Event> collector) throws Exception {
// get current time and compute timeout time
long currentTime = context.timerService().currentProcessingTime();
long timeoutTime = currentTime + timeOut;
// register timer for timeout time
context.timerService().registerProcessingTimeTimer(timeoutTime);
List<Event> queue = listState.value();
if (queue == null) {
queue = new ArrayList<Event>();
}
Long current = lastTime.value();
queue.add(new Event(value.f0, value.f1));
lastTime.update(timeoutTime);
listState.update(queue);
}
@Override
public void onTimer(long timestamp, OnTimerContext ctx, Collector<Event> out) throws Exception {
// System.out.println("onTimer: " + timestamp);
// check if this was the last timer we registered
System.out.println("timestamp: " + timestamp);
List<Event> queue = listState.value();
Long current = lastTime.value();
if (timestamp == current.longValue()) {
Collections.sort(queue);
queue.forEach( e -> {
out.collect(e);
});
queue.clear();
listState.clear();
}
}
}
class Event implements Comparable<Event> {
String value;
Integer qtd;
public Event(String value, Integer qtd) {
this.value = value;
this.qtd = qtd;
}
public String getValue() { return value; }
public Integer getQtd() { return qtd; }
@Override
public String toString() {
return "Event{" +"value='" + value + '\'' +", qtd=" + qtd +'}';
}
@Override
public int compareTo(@NotNull Event event) {
return this.getValue().compareTo(event.getValue());
}
}
So when I use $ nc -lk 9000 and type the words on the console I see them in order on the output
...
Event{value='soccer', qtd=7}
Event{value='swim', qtd=5}
...
Event{value='basketball', qtd=9}
Event{value='soccer', qtd=8}
Event{value='swim', qtd=6}
The other UDFs are for the other transformations of the stream program and they are here for completeness.
public class SplitterFlatMap implements FlatMapFunction<String, Tuple2<String, Integer>> {
private static final long serialVersionUID = 3121588720675797629L;
@Override
public void flatMap(String sentence, Collector<Tuple2<String, Integer>> out) throws Exception {
for (String word : sentence.split(" ")) {
out.collect(Tuple2.of(word, 1));
}
}
}
public class WordKeySelector implements KeySelector<Tuple2<String, Integer>, String> {
@Override
public String getKey(Tuple2<String, Integer> value) throws Exception {
return value.f0;
}
}
public class SumReducer implements ReduceFunction<Tuple2<String, Integer>> {
@Override
public Tuple2<String, Integer> reduce(Tuple2<String, Integer> event1, Tuple2<String, Integer> event2) throws Exception {
return Tuple2.of(event1.f0, event1.f1 + event2.f1);
}
}
The .sum(1) method will do everything you need (no need for using apply()), as long as the Splitter class (which should be a FlatMapFunction) is emitting Tuple2<String, Integer> records, where String is the word, and Integer is always 1.
So then .sum(1) will do the aggregation for you. If you needed something different than what sum() does, you would typically use .reduce(new MyCustomReduceFunction()), as that's going to be the most efficient and scalable approach, in terms of not needing to buffer lots in memory.
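For illustration, a minimal sketch of that simpler pipeline, assuming the Splitter from the question implements FlatMapFunction<String, Tuple2<String, Integer>> and emits (word, 1) records:
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class SumOnlyWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Splitter (from the question) emits Tuple2.of(word, 1) for every word.
        DataStream<Tuple2<String, Integer>> counts = env
                .socketTextStream("localhost", 9000)
                .flatMap(new Splitter())
                .keyBy(0)                        // key by the word field
                .timeWindow(Time.seconds(5))
                .sum(1)                          // aggregates the counts; no apply() needed
                .filter(word -> word.f1 >= 3);
        counts.print();
        env.execute("Windowed word count using sum only");
    }
}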

Chronicle Queue V3. Can Entries be lost on data block roll-over?

I have an application that writes entries to a Chronicle Queue (V3) and also retains excerpt entry index values in other (Chronicle)Maps, by way of providing indexed access into the queue. Sometimes we fail to find a given entry that we saved earlier, and I believe it may be related to data-block roll-over.
Below is a stand-alone test program that reproduces such use-cases at small scale. It repeatedly writes an entry and immediately attempts to look the resulting index value up using a separate ExcerptTailer. All is well for a while, until the first data block is used up and a second data file is assigned; then the retrieval failures start. If the data-block size is increased to avoid roll-overs, no entries are lost. Using a small index data-block size, causing multiple index files to be created, doesn't cause a problem either.
The test program also tries using an ExcerptListener running in parallel to see whether the entries apparently 'lost' by the writer are ever received by the reader thread - they're not. It also tries to re-read the resulting queue from start to end, which reconfirms that they really are lost.
Stepping through the code, I see that when looking up a 'missing entry' within AbstractVanillarExcerpt#index, it appears to successfully locate the correct VanillaMappedBytes object from the dataCache, but determines that there is no entry at the data offset, as the len == 0. In addition to the entries not being found, at some point after the problems start occurring post roll-over, an NPE is thrown from within the VanillaMappedFile#fileChannel method, due to it having been passed a null File path. The code path assumes that when an entry has been successfully resolved in the index, a file will always have been found, but that isn't the case here.
Is it possible to reliably use Chronicle Queue across data-block roll-overs, and if so, what might I be doing that is causing the problem I'm experiencing?
import java.io.IOException;
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Set;
import org.junit.Before;
import org.junit.Test;
import net.openhft.affinity.AffinitySupport;
import net.openhft.chronicle.Chronicle;
import net.openhft.chronicle.ChronicleQueueBuilder;
import net.openhft.chronicle.ExcerptAppender;
import net.openhft.chronicle.ExcerptCommon;
import net.openhft.chronicle.ExcerptTailer;
import net.openhft.chronicle.VanillaChronicle;
public class ChronicleTests {
private static final int CQ_LEN = VanillaChronicle.Cycle.DAYS.length();
private static final long CQ_ENT = VanillaChronicle.Cycle.DAYS.entries();
private static final String ROOT_DIR = System.getProperty(ChronicleTests.class.getName() + ".ROOT_DIR",
"C:/Temp/chronicle/");
private static final String QDIR = System.getProperty(ChronicleTests.class.getName() + ".QDIR", "chronicleTests");
private static final int DATA_SIZE = Integer
.parseInt(System.getProperty(ChronicleTests.class.getName() + ".DATA_SIZE", "100000"));
// Chunk file size of CQ index
private static final int INDX_SIZE = Integer
.parseInt(System.getProperty(ChronicleTests.class.getName() + ".INDX_SIZE", "10000"));
private static final int Q_ENTRIES = Integer
.parseInt(System.getProperty(ChronicleTests.class.getName() + ".Q_ENTRIES", "5000"));
// Data type id
protected static final byte FSYNC_DATA = 1;
protected static final byte NORMAL_DATA = 0;
protected static final byte TH_START_DATA = -1;
protected static final byte TH_END_DATA = -2;
protected static final byte CQ_START_DATA = -3;
private static final long MAX_RUNTIME_MILLISECONDS = 30000;
private static String PAYLOAD_STRING = "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
private static byte PAYLOAD_BYTES[] = PAYLOAD_STRING.getBytes();
private Chronicle _chronicle;
private String _cqPath = ROOT_DIR + QDIR;
@Before
public void init() {
buildCQ();
}
@Test
public void test() throws IOException, InterruptedException {
boolean passed = true;
Collection<Long> missingEntries = new LinkedList<Long>();
long sent = 0;
Thread listener = listen();
try {
listener.start();
// Write entries to CQ,
for (int i = 0; i < Q_ENTRIES; i++) {
long entry = writeQEntry(PAYLOAD_BYTES, (i % 100) == 0);
sent++;
// check each entry can be looked up
boolean found = checkEntry(i, entry);
if (!found)
missingEntries.add(entry);
passed &= found;
}
// Wait awhile for the listener
listener.join(MAX_RUNTIME_MILLISECONDS);
if (listener.isAlive())
listener.interrupt();
} finally {
if (listener.isAlive()) { // => exception raised so wait for listener
log("Give listener a chance....");
sleep(MAX_RUNTIME_MILLISECONDS);
listener.interrupt();
}
log("Sent: " + sent + " Received: " + _receivedEntries.size());
// Look for missing entries in receivedEntries
missingEntries.forEach(me -> checkMissingEntry(me));
log("All passed? " + passed);
// Try to find missing entries by searching from the start...
searchFromStartFor(missingEntries);
_chronicle.close();
_chronicle = null;
// Re-initialise CQ and look for missing entries again...
log("Re-initialise");
init();
searchFromStartFor(missingEntries);
}
}
private void buildCQ() {
try {
// build chronicle queue
_chronicle = ChronicleQueueBuilder.vanilla(_cqPath).cycleLength(CQ_LEN).entriesPerCycle(CQ_ENT)
.indexBlockSize(INDX_SIZE).dataBlockSize(DATA_SIZE).build();
} catch (IOException e) {
throw new InitializationException("Failed to initialize Active Trade Store.", e);
}
}
private long writeQEntry(byte dataArray[], boolean fsync) throws IOException {
ExcerptAppender appender = _chronicle.createAppender();
return writeData(appender, dataArray, fsync);
}
private boolean checkEntry(int seqNo, long entry) throws IOException {
ExcerptTailer tailer = _chronicle.createTailer();
if (!tailer.index(entry)) {
log("SeqNo: " + seqNo + " for entry + " + entry + " not found");
return false;
}
boolean isMarker = isMarker(tailer);
boolean isFsyncData = isFsyncData(tailer);
boolean isNormalData = isNormalData(tailer);
String type = isMarker ? "MARKER" : isFsyncData ? "FSYNC" : isNormalData ? "NORMALDATA" : "UNKNOWN";
log("Entry: " + entry + "(" + seqNo + ") is " + type);
return true;
}
private void log(String string) {
System.out.println(string);
}
private void searchFromStartFor(Collection<Long> missingEntries) throws IOException {
Set<Long> foundEntries = new HashSet<Long>(Q_ENTRIES);
ExcerptTailer tailer = _chronicle.createTailer();
tailer.toStart();
while (tailer.nextIndex())
foundEntries.add(tailer.index());
Iterator<Long> iter = missingEntries.iterator();
long foundCount = 0;
while (iter.hasNext()) {
long me = iter.next();
if (foundEntries.contains(me)) {
log("Found missing entry: " + me);
foundCount++;
}
}
log("searchFromStartFor Found: " + foundCount + " of: " + missingEntries.size() + " missing entries");
}
private void checkMissingEntry(long missingEntry) {
if (_receivedEntries.contains(missingEntry))
log("Received missing entry:" + missingEntry);
}
Set<Long> _receivedEntries = new HashSet<Long>(Q_ENTRIES);
private Thread listen() {
Thread returnVal = new Thread("Listener") {
public void run() {
try {
int receivedCount = 0;
ExcerptTailer tailer = _chronicle.createTailer();
tailer.toStart();
while (receivedCount < Q_ENTRIES) {
if (tailer.nextIndex()) {
_receivedEntries.add(tailer.index());
} else {
ChronicleTests.this.sleep(1);
}
}
log("listener complete");
} catch (IOException e) {
log("Interupted before receiving all entries");
}
}
};
return returnVal;
}
private void sleep(long interval) {
try {
Thread.sleep(interval);
} catch (InterruptedException e) {
// No action required
}
}
protected static final int THREAD_ID_LEN = Integer.SIZE / Byte.SIZE;
protected static final int DATA_TYPE_LEN = Byte.SIZE / Byte.SIZE;
protected static final int TIMESTAMP_LEN = Long.SIZE / Byte.SIZE;
protected static final int CRC_LEN = Long.SIZE / Byte.SIZE;
protected static long writeData(ExcerptAppender appender, byte dataArray[],
boolean fsync) {
appender.startExcerpt(DATA_TYPE_LEN + THREAD_ID_LEN + dataArray.length
+ CRC_LEN);
appender.nextSynchronous(fsync);
if (fsync) {
appender.writeByte(FSYNC_DATA);
} else {
appender.writeByte(NORMAL_DATA);
}
appender.writeInt(AffinitySupport.getThreadId());
appender.write(dataArray);
appender.writeLong(CRCCalculator.calcDataAreaCRC(appender));
appender.finish();
return appender.lastWrittenIndex();
}
protected static boolean isMarker(ExcerptCommon excerpt) {
if (isCqStartMarker(excerpt) || isStartMarker(excerpt) || isEndMarker(excerpt)) {
return true;
}
return false;
}
protected static boolean isCqStartMarker(ExcerptCommon excerpt) {
return isDataTypeMatched(excerpt, CQ_START_DATA);
}
protected static boolean isStartMarker(ExcerptCommon excerpt) {
return isDataTypeMatched(excerpt, TH_START_DATA);
}
protected static boolean isEndMarker(ExcerptCommon excerpt) {
return isDataTypeMatched(excerpt, TH_END_DATA);
}
protected static boolean isData(ExcerptTailer tailer, long index) {
if (!tailer.index(index)) {
return false;
}
return isData(tailer);
}
private static void movePosition(ExcerptCommon excerpt, long position) {
if (excerpt.position() != position)
excerpt.position(position);
}
private static void moveToFsyncFlagPos(ExcerptCommon excerpt) {
movePosition(excerpt, 0);
}
private static boolean isDataTypeMatched(ExcerptCommon excerpt, byte type) {
moveToFsyncFlagPos(excerpt);
byte b = excerpt.readByte();
if (b == type) {
return true;
}
return false;
}
protected static boolean isNormalData(ExcerptCommon excerpt) {
return isDataTypeMatched(excerpt, NORMAL_DATA);
}
protected static boolean isFsyncData(ExcerptCommon excerpt) {
return isDataTypeMatched(excerpt, FSYNC_DATA);
}
/**
* Check if this entry is Data
*
* @param excerpt
* @return true if the entry is data
*/
protected static boolean isData(ExcerptCommon excerpt) {
if (isNormalData(excerpt) || isFsyncData(excerpt)) {
return true;
}
return false;
}
}
The problem only occurs when initialising the data-block size with a value that is not a power of two. The built-in configurations on IndexedChronicleQueueBuilder (small(), medium(), large()) take care to initialise using powers of two which provided the clue as to the appropriate usage.
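For illustration, a minimal sketch of building the queue from the test above with power-of-two block sizes (the concrete sizes are just example values):
// Power-of-two chunk sizes; the values here are only examples.
int dataBlockSize = 128 * 1024;   // 131072
int indexBlockSize = 16 * 1024;   // 16384
Chronicle chronicle = ChronicleQueueBuilder.vanilla(_cqPath)
        .cycleLength(CQ_LEN)
        .entriesPerCycle(CQ_ENT)
        .indexBlockSize(indexBlockSize)
        .dataBlockSize(dataBlockSize)
        .build();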
Notwithstanding the above response regarding support, which I totally appreciate, it would be useful if a knowledgeable Chronicle user could confirm that the integrity of Chronicle Queue depends on using a data-block size of a power of two.

Java 8 Stream , convert List<File> to Map<Integer, List<FIle>>

I have the below code as a traditional Java loop and would like to use a Java 8 Stream instead.
I have a sorted list of files (sorted by file size). I group these files together so that the total size of all files in a group does not exceed the given max size, and put them in a Map with keys 1, 2, 3, and so on. Here is the code.
List<File> allFilesSortedBySize = getListOfFiles();
Map<Integer, List<File>> filesGroupedByMaxSizeMap = new HashMap<Integer, List<File>>();
double totalLength = 0L;
int count = 0;
List<File> filesWithSizeTotalMaxSize = Lists.newArrayList();
//group the files to be zipped together as per maximum allowable size in a map
for (File file : allFilesSortedBySize) {
long sizeInBytes = file.length();
double sizeInMb = (double)sizeInBytes / (1024 * 1024);
totalLength = totalLength + sizeInMb;
if(totalLength <= maxSize) {
filesWithSizeTotalMaxSize.add(file);
} else {
count = count + 1;
filesGroupedByMaxSizeMap.put(count, filesWithSizeTotalMaxSize);
filesWithSizeTotalMaxSize = Lists.newArrayList();
filesWithSizeTotalMaxSize.add(file);
totalLength = sizeInMb;
}
}
filesGroupedByMaxSizeMap.put(count+1, filesWithSizeTotalMaxSize);
return filesGroupedByMaxSizeMap;
After reading, I found a solution using Collectors.groupingBy instead.
Code using a Java 8 lambda expression:
private final long MB = 1024 * 1024;
private Map<Integer, List<File>> grouping(List<File> files, long maxSize) {
AtomicInteger group = new AtomicInteger(0);
AtomicLong groupSize = new AtomicLong();
return files.stream().collect(groupingBy((file) -> {
if (groupSize.addAndGet(file.length()) <= maxSize * MB) {
return group.get() == 0 ? group.incrementAndGet() : group.get();
}
groupSize.set(file.length());
return group.incrementAndGet();
}));
}
Code provided by @Holger, which frees you from checking whether group equals 0:
private static final long MB = 1024 * 1024;
private Map<Integer, List<File>> grouping(List<File> files, long maxSize) {
AtomicInteger group = new AtomicInteger(0);
//force initializing group starts with 1 even if the first file is empty.
AtomicLong groupSize = new AtomicLong(maxSize * MB + 1);
return files.stream().collect(groupingBy((file) -> {
if (groupSize.addAndGet(file.length()) <= maxSize * MB) {
return group.get();
}
groupSize.set(file.length());
return group.incrementAndGet();
}));
}
Code using an anonymous class
Inspired by @Holger: all "solutions" using a grouping function that modifies external state are hacks abusing the API, so you can use an anonymous class to manage the grouping-logic state instead.
private static final long MB = 1024 * 1024;
private Map<Integer, List<File>> grouping(List<File> files, long maxSize) {
return files.stream().collect(groupingBy(groupSize(maxSize)));
}
private Function<File, Integer> groupSize(final long maxSize) {
long maxBytesSize = maxSize * MB;
return new Function<File, Integer>() {
private int group;
private long groupSize = maxBytesSize + 1;
@Override
public Integer apply(File file) {
return hasRemainingFor(file) ? current(file) : next(file);
}
private boolean hasRemainingFor(File file) {
return (groupSize += file.length()) <= maxBytesSize;
}
private int next(File file) {
groupSize = file.length();
return ++group;
}
private int current(File file) {
return group;
}
};
}
Test
import org.junit.jupiter.api.Test;
import java.io.File;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;
import static java.util.Arrays.asList;
import static java.util.Collections.singletonList;
import static java.util.stream.Collectors.groupingBy;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.equalTo;
/**
* Created by holi on 3/24/17.
*/
public class StreamGroupingTest {
private final File FILE_1MB = file(1);
private final File FILE_2MB = file(2);
private final File FILE_3MB = file(3);
@Test
void eachFileInIndividualGroupIfEachFileSizeGreaterThanMaxSize() {
Map<Integer, List<File>> groups = grouping(asList(FILE_2MB, FILE_3MB), 1);
assertThat(groups.size(), equalTo(2));
assertThat(groups.get(1), equalTo(singletonList(FILE_2MB)));
assertThat(groups.get(2), equalTo(singletonList(FILE_3MB)));
}
@Test
void allFilesInAGroupIfTotalSizeOfFilesLessThanOrEqualMaxSize() {
Map<Integer, List<File>> groups = grouping(asList(FILE_2MB, FILE_3MB), 5);
assertThat(groups.size(), equalTo(1));
assertThat(groups.get(1), equalTo(asList(FILE_2MB, FILE_3MB)));
}
@Test
void allNeighboringFilesInAGroupThatTotalOfTheirSizeLessThanOrEqualMaxSize() {
Map<Integer, List<File>> groups = grouping(asList(FILE_1MB, FILE_2MB, FILE_3MB), 3);
assertThat(groups.size(), equalTo(2));
assertThat(groups.get(1), equalTo(asList(FILE_1MB, FILE_2MB)));
assertThat(groups.get(2), equalTo(singletonList(FILE_3MB)));
}
@Test
void eachFileInIndividualGroupIfTheFirstFileAndTotalOfEachNeighboringFilesSizeGreaterThanMaxSize() {
Map<Integer, List<File>> groups = grouping(asList(FILE_2MB, FILE_1MB, FILE_3MB), 2);
assertThat(groups.size(), equalTo(3));
assertThat(groups.get(1), equalTo(singletonList(FILE_2MB)));
assertThat(groups.get(2), equalTo(singletonList(FILE_1MB)));
assertThat(groups.get(3), equalTo(singletonList(FILE_3MB)));
}
@Test
void theFirstEmptyFileInGroup1() throws Throwable {
File emptyFile = file(0);
Map<Integer, List<File>> groups = grouping(singletonList(emptyFile), 2);
assertThat(groups.get(1), equalTo(singletonList(emptyFile)));
}
private static final long MB = 1024 * 1024;
private Map<Integer, List<File>> grouping(List<File> files, long maxSize) {
AtomicInteger group = new AtomicInteger(0);
AtomicLong groupSize = new AtomicLong(maxSize * MB + 1);
return files.stream().collect(groupingBy((file) -> {
if (groupSize.addAndGet(file.length()) <= maxSize * MB) {
return group.get();
}
groupSize.set(file.length());
return group.incrementAndGet();
}));
}
private Function<File, Integer> groupSize(final long maxSize) {
long maxBytesSize = maxSize * MB;
return new Function<File, Integer>() {
private int group;
private long groupSize = maxBytesSize + 1;
@Override
public Integer apply(File file) {
return hasRemainingFor(file) ? current(file) : next(file);
}
private boolean hasRemainingFor(File file) {
return (groupSize += file.length()) <= maxBytesSize;
}
private int next(File file) {
groupSize = file.length();
return ++group;
}
private int current(File file) {
return group;
}
};
}
private File file(int sizeOfMB) {
return new File(String.format("%dMB file", sizeOfMB)) {
@Override
public long length() {
return sizeOfMB * MB;
}
@Override
public boolean equals(Object obj) {
File that = (File) obj;
return length() == that.length();
}
};
}
}
Since the processing of each element depends heavily on the processing of the previous element, this task is not suitable for streams. You can still achieve it using a custom collector, but the implementation would be much more complicated than the loop solution.
In other words, there is no improvement when you rewrite this as a stream operation. Stay with the loop.
However, there are still some things you can improve.
List<File> allFilesSortedBySize = getListOfFiles();
// get maxSize in bytes ONCE, instead of converting EACH size to MiB
long maxSizeBytes = (long)(maxSize * 1024 * 1024);
// use "diamond operator"
Map<Integer, List<File>> filesGroupedByMaxSizeMap = new HashMap<>();
// start with "create new list" condition to avoid code duplication
long totalLength = maxSizeBytes;
// count is obsolete, the map maintains a size
// the initial "totalLength = maxSizeBytes" forces creating a new list within the loop
List<File> filesWithSizeTotalMaxSize = null;
for(File file: allFilesSortedBySize) {
long length = file.length();
if(maxSizeBytes-totalLength <= length) {
filesWithSizeTotalMaxSize = new ArrayList<>(); // no utility method needed
// store each list immediately, so no action after the loop needed
filesGroupedByMaxSizeMap.put(filesGroupedByMaxSizeMap.size()+1,
filesWithSizeTotalMaxSize);
totalLength = 0;
}
totalLength += length;
filesWithSizeTotalMaxSize.add(file);
}
return filesGroupedByMaxSizeMap;
You may further replace
filesWithSizeTotalMaxSize = new ArrayList<>();
filesGroupedByMaxSizeMap.put(filesGroupedByMaxSizeMap.size()+1,
filesWithSizeTotalMaxSize);
with
filesWithSizeTotalMaxSize = filesGroupedByMaxSizeMap.computeIfAbsent(
filesGroupedByMaxSizeMap.size()+1, x -> new ArrayList<>());
but there might be different opinions whether this is an improvement.
The simplest solution to the problem I could think of is to use an AtomicLong wrapper for the running size and an AtomicInteger wrapper for the group index. These have some useful methods for performing basic arithmetic operations on them, which are very useful in this particular case.
List<File> files = getListOfFiles();
AtomicLong length = new AtomicLong();
AtomicInteger index = new AtomicInteger(1);
long maxLength = SOME_ARBITRARY_NUMBER;
Map<Integer, List<File>> collect = files.stream().collect(Collectors.groupingBy(
file -> {
if (length.addAndGet(file.length()) <= maxLength) {
return index.get();
}
length.set(file.length());
return index.incrementAndGet();
}
));
return collect;
Basically, Collectors.groupingBy does the work you intended.

Filter index hits by node ids in Neo4j

I have a set of node IDs (Set<Long>) and want to restrict or filter the results of a query to only the nodes in this set. Is there a performant way to do this?
Set<Node> query(final GraphDatabaseService graphDb, final Set<Long> nodeSet) {
final Index<Node> searchIndex = graphDb.index().forNodes("search");
final IndexHits<Node> hits = searchIndex.query(new QueryContext("value*"));
// what now to return only index hits that are in the given Set of Node's?
}
Wouldn't it be faster the other way round, i.e. if you get the nodes from your set and compare the property to the value you are looking for?
for (Iterator<Long> it = nodeSet.iterator(); it.hasNext();) {
Node n=db.getNodeById(it.next());
if (!n.getProperty("value","").equals("foo")) it.remove();
}
or for your suggestion
Set<Node> query(final GraphDatabaseService graphDb, final Set<Long> nodeSet) {
final Index<Node> searchIndex = graphDb.index().forNodes("search");
final IndexHits<Node> hits = searchIndex.query(new QueryContext("value*"));
Set<Node> result=new HashSet<>();
for (Node n : hits) {
if (nodeSet.contains(n.getId())) result.add(n);
}
return result;
}
So the fastest solution I found was to use Lucene's IndexSearcher directly on the index created by Neo4j and a custom Filter to restrict the search to specific nodes.
Just open the Neo4j index folder "{neo4j-database-folder}/index/lucene/node/{index-name}" with the Lucene IndexReader. Make sure not to add a Lucene dependency to your project in a version other than the one Neo4j uses, which currently is Lucene 3.6.2!
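For example, a minimal sketch of opening that folder with the Lucene 3.6 API (the path below is a placeholder for your database folder and index name):
import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.FSDirectory;

// Placeholder path: {neo4j-database-folder}/index/lucene/node/{index-name}
File indexFolder = new File("/path/to/neo4j-db/index/lucene/node/search");
IndexReader reader = IndexReader.open(FSDirectory.open(indexFolder));
IndexSearcher searcher = new IndexSearcher(reader);
// ... run queries with the searcher and the DocIdFilter below, then close searcher and reader.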
Here's my Lucene Filter implementation that filters all query results by the given Set of document IDs. (Lucene document IDs (Integer) ARE NOT Neo4j node IDs (Long)!)
import java.io.IOException;
import java.util.PriorityQueue;
import java.util.Set;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
public class DocIdFilter extends Filter {
public class FilteredDocIdSetIterator extends DocIdSetIterator {
private final PriorityQueue<Integer> filterQueue;
private int docId;
public FilteredDocIdSetIterator(final Set<Integer> filterSet) {
this(new PriorityQueue<Integer>(filterSet));
}
public FilteredDocIdSetIterator(final PriorityQueue<Integer> filterQueue) {
this.filterQueue = filterQueue;
}
@Override
public int docID() {
return this.docId;
}
@Override
public int nextDoc() throws IOException {
if (this.filterQueue.isEmpty()) {
this.docId = NO_MORE_DOCS;
} else {
this.docId = this.filterQueue.poll();
}
return this.docId;
}
@Override
public int advance(final int target) throws IOException {
while ((this.docId = this.nextDoc()) < target)
;
return this.docId;
}
}
private final PriorityQueue<Integer> filterQueue;
public DocIdFilter(final Set<Integer> filterSet) {
super();
this.filterQueue = new PriorityQueue<Integer>(filterSet);
}
private static final long serialVersionUID = -865683019349988312L;
@Override
public DocIdSet getDocIdSet(final IndexReader reader) throws IOException {
return new DocIdSet() {
@Override
public DocIdSetIterator iterator() throws IOException {
return new FilteredDocIdSetIterator(DocIdFilter.this.filterQueue);
}
};
}
}
To map the set of Neo4j node IDs (which the query result should be filtered with) to the correct Lucene document IDs, I created an in-memory bidirectional map:
public static HashBiMap<Integer, Long> generateDocIdToNodeIdMap(final IndexReader indexReader)
throws LuceneIndexException {
final HashBiMap<Integer, Long> result = HashBiMap.create(indexReader.numDocs());
for (int i = 0; i < indexReader.maxDoc(); i++) {
if (indexReader.isDeleted(i)) {
continue;
}
final Document doc;
try {
doc = indexReader.document(i, new FieldSelector() {
private static final long serialVersionUID = 5853247619312916012L;
@Override
public FieldSelectorResult accept(final String fieldName) {
if ("_id_".equals(fieldName)) {
return FieldSelectorResult.LOAD_AND_BREAK;
} else {
return FieldSelectorResult.NO_LOAD;
}
}
});
} catch (final IOException e) {
throw new LuceneIndexException(indexReader.directory(), "could not read document with ID: '" + i
+ "' from index.", e);
}
final Long nodeId;
try {
nodeId = Long.valueOf(doc.get("_id_"));
} catch (final NumberFormatException e) {
throw new LuceneIndexException(indexReader.directory(),
"could not parse node ID value from document ID: '" + i + "'", e);
}
result.put(i, nodeId);
}
return result;
}
I'm using the Google Guava library, which provides a bidirectional map and initialization of collections with a specific size.
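For example, a minimal sketch of turning the Set<Long> of node IDs into the Set<Integer> of Lucene document IDs that DocIdFilter expects (names are taken from the snippets above; nodeSet is the set from the question):
import java.util.HashSet;
import java.util.Set;
import com.google.common.collect.BiMap;
import com.google.common.collect.HashBiMap;

// docIdToNodeId comes from generateDocIdToNodeIdMap(indexReader) above.
HashBiMap<Integer, Long> docIdToNodeId = generateDocIdToNodeIdMap(indexReader);
BiMap<Long, Integer> nodeIdToDocId = docIdToNodeId.inverse();

Set<Integer> docIdSet = new HashSet<Integer>(nodeSet.size());
for (Long nodeId : nodeSet) {
    Integer docId = nodeIdToDocId.get(nodeId);
    if (docId != null) {   // node may not be present in this index
        docIdSet.add(docId);
    }
}
DocIdFilter filter = new DocIdFilter(docIdSet);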
