What are the Graph class and the createExpensiveGraph() method in the Manual Population example of the Caffeine cache?
Cache<Key, Graph> cache = Caffeine.newBuilder()
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .maximumSize(10_000)
    .build();
// Lookup an entry, or null if not found
Graph graph = cache.getIfPresent(key);
// Lookup and compute an entry if absent, or null if not computable
graph = cache.get(key, k -> createExpensiveGraph(key));
// Insert or update an entry
cache.put(key, graph);
// Remove an entry
cache.invalidate(key);
https://github.com/ben-manes/caffeine/wiki/Population
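For context: Graph and createExpensiveGraph() are not part of the Caffeine API; in the wiki example they are placeholders for an arbitrary value type and for whatever expensive computation produces it. A minimal sketch of what they stand for (illustrative definitions, not Caffeine code):

// Placeholder domain types: any key and any expensive-to-build value will do
class Key { /* e.g. an identifier for the graph to build */ }

class Graph { /* the costly-to-compute value being cached */ }

// Stands in for any expensive computation, e.g. a database load or a traversal
static Graph createExpensiveGraph(Key key) {
    return new Graph();
}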
I have a method that validates whether a Stream contains certain elements.
Task
For example, there is a sequence of numbers (the numbers are not repeated), each larger than the previous one:
1, 20, 35, 39, 45, 43
It is necessary to check whether a specified range, for example 35...49, is present in this stream. If there is no such range, an Exception must be thrown.
But since this is asynchronous processing, the usual methods do not work here: we process elements as they arrive and do not know what the next element will be.
The elements of this sequence need to be summed, and the summation should happen incrementally (as soon as the start of the range is found).
While processing, we need to check whether the endpoint appears in the generated sequence; if the entire stream has been received but that point never appeared, an Exception must be thrown, since the upper bound of the specified range was never received.
Also, the calculation must not start until the starting point of the specified range is found, and we cannot block the stream, otherwise we will get the same Exception.
How can such a check be organized?
When I work with a regular (non-reactive) sequence, it looks like this:
private boolean isRangeValues() {
    List<BigInteger> sequence = Arrays.asList(
            new BigInteger("11"),
            new BigInteger("15"),
            new BigInteger("23"),
            new BigInteger("27"),
            new BigInteger("30"));
    BigInteger startRange = new BigInteger("15");
    BigInteger finishRange = new BigInteger("27");
    boolean isStartRangeMember = sequence.contains(startRange);
    boolean isFinishRangeMember = sequence.contains(finishRange);
    return isStartRangeMember && isFinishRangeMember;
}
But my task is to process a stream of elements that are generated at some interval.
Spring's reactive stack is used, and I receive the result as a Flux.
Simply converting it to a list and processing that will not work; it would block and throw an Exception.
After filtering, the elements of the stream will continue to be processed.
But if I detect a validation error during filtering (in this case, the required elements are missing), I need to throw an Exception, which will be handled globally and returned to the client.
@GetMapping("v1/sequence/{startRange}/{endRange}")
Mono<BigInteger> getSumSequence(
        @PathVariable BigInteger startRange,
        @PathVariable BigInteger endRange) {
    Flux<BigInteger> sequenceFlux = sequenceGenerated();
    validateSequence(sequenceFlux);
    return sum(sequenceFlux);
}

private Mono<BigInteger> sum(Flux<BigInteger> sequenceFlux) {
    // .....
}

private void validateSequence(Flux<BigInteger> sequenceFlux) {
    // ... if the sequence is wrong:
    throw new RuntimeException();
}
}
I came up with a possible solution (published in this topic).
public void validateRangeSequence(SequenceDto dto) {
    Flux<BigInteger> sequenceFlux = dto.getSequenceFlux();
    BigInteger startRange = dto.getStartRange();
    BigInteger endRange = dto.getEndRange();
    Mono<Boolean> isStartRangeMember = sequenceFlux.hasElement(startRange);
    Mono<Boolean> isEndRangeMember = sequenceFlux.hasElement(endRange);
    // This compares the Mono objects themselves, not the Boolean values they emit
    if (!isStartRangeMember.equals(isEndRangeMember)) {
        throw new RuntimeException("error");
    }
}
But it doesn't work as expected; even correct input causes an exception.
Update
public void validateRangeSeq(RangeSequenceDto dto) {
    Flux<BigInteger> sequenceFlux = dto.getSequenceFlux();
    BigInteger startRange = dto.getStartRange();
    BigInteger endRange = dto.getEndRange();
    Mono<Boolean> isStartRangeMember = sequenceFlux.hasElement(startRange);
    Mono<Boolean> isEndRangeMember = sequenceFlux.hasElement(endRange);
    sequenceFlux
            .handle((number, sink) -> {
                // Still compares the Mono objects themselves, not their emitted values
                if (!isStartRangeMember.equals(isEndRangeMember)) {
                    sink.error(new RangeWrongSequenceExc("The given range is wrong!"));
                } else {
                    sink.next(number);
                }
            });
}
Unfortunately, that solution doesn't work either.
// The Flux returned by handle() is never subscribed, so this pipeline never executes
sequenceFlux
        .handle((bigInteger, synchronousSink) -> {
            if (!bigInteger.equals(startRange)) {
                synchronousSink.error(new RuntimeException("!!!!!!!!!!! ---- Wrong range!"));
            } else {
                synchronousSink.next(bigInteger);
            }
        });
This piece of code doesn't work either (it does not react in any way).
What do you think about this? Is this the right approach, or are there others?
I am not familiar with the reactive stack in Spring and do not know how to handle such a situation here.
Maybe someone has ideas on how to organize such filtering without blocking the processing of elements in the stream.
You can try to do it like this:
Flux<Integer> yourStream = Flux.just(1, 2, 3, 4, 5, 6, 7, 8, 9, 10).share();
Flux.zip(
        yourStream.filter(integer -> integer.equals(4)),
        yourStream.filter(integer -> integer.equals(6)),
        // Tuples is reactor.util.function.Tuples
        (integer, integer2) -> Tuples.of(integer, integer2))
    .subscribe(System.out::println);
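If you also need to fail when the stream completes without one of the endpoints, here is a hedged sketch (assuming Reactor 3; the method name and exception are illustrative) that performs the check in a single pass:

// Completes empty when both endpoints were seen; errors otherwise.
private Mono<Void> validateRange(Flux<BigInteger> sequenceFlux,
                                 BigInteger startRange, BigInteger endRange) {
    return sequenceFlux
            .filter(n -> n.equals(startRange) || n.equals(endRange))
            .distinct()   // count each endpoint at most once
            .count()      // evaluated only when the stream completes
            .flatMap(seen -> seen == 2
                    ? Mono.empty()
                    : Mono.error(new IllegalStateException(
                            "Range " + startRange + ".." + endRange + " not found")));
}

Because the result is a Mono that errors only when the source completes, the exception propagates through the reactive pipeline to a global error handler instead of being thrown from a synchronous method.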
From my previous question, Hibernate: Cannot fetch data back to Map<>, I was getting a NullPointerException after I tried to fetch the data back. I thought the reason was the primary key (when the entity was added to the Map via put(K,V), the primary key was null, but after the JPA persist it was generated, which changed the key's hash and broke the HashMap). I had this equals and hashCode:
User.java:
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof User)) return false;
    User user = (User) o;
    return Objects.equals(id, user.id) && Objects.equals(username, user.username)
            && Objects.equals(about, user.about) && Objects.equals(friendships, user.friendships)
            && Objects.equals(posts, user.posts);
}

@Override
public int hashCode() {
    return Objects.hash(id, username, about, friendships, posts);
}
I used all fields to calculate the hash. That caused the NullPointerException, but not because of id (the primary key); it was because of the collections involved in the hash (friendships and posts). So I changed both methods to use database identity only:
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (id == null) return false;
    if (!(o instanceof User)) return false;
    User user = (User) o;
    return this.id.equals(user.getId());
}

@Override
public int hashCode() {
    return id == null ? System.identityHashCode(this) : id.hashCode();
}
So now only the id field is involved in the hash. Now it didn't give me a NullPointerException for fetched data. I used this code to test it:
(from User.java):
public void addFriend(User friend) {
    Friendship friendship = new Friendship();
    friendship.setOwner(this);
    friendship.setFriend(friend);
    this.friendships.put(friend, friendship);
}
DemoApplication.java:
@Bean
public CommandLineRunner dataLoader(UserRepository userRepo, FriendshipRepository friendshipRepo) {
    return new CommandLineRunner() {
        @Override
        public void run(String... args) throws Exception {
            User f1 = new User("friend1");
            User f2 = new User("friend2");
            User u1 = new User("user1");
            System.out.println(f1);
            System.out.println(f1.hashCode());
            u1.addFriend(f1);
            u1.addFriend(f2);
            userRepo.save(u1);
            User fetchedUser = userRepo.findByUsername("user1");
            System.out.println(fetchedUser.getFriendships().get(f1).getFriend());
            System.out.println(fetchedUser.getFriendships().get(f1).getFriend().hashCode());
        }
    };
}
You can see I am:
1. putting the f1 User into a friendship of user1 (the owner of the friendship), at a time when f1.getId() == null
2. saving user1, at which point f1's id gets assigned its primary-key value by Hibernate (the friendship relation is Cascade.ALL, so persisting cascades)
3. fetching the f1 User back by getting it from the Map, which does the look-up with the hashCode, which is now broken because f1.getId() != null
But even then, I got the right element. The output:
User{id=null, username='friend1', about='null', friendships={}, posts=[]}
-935581894
...
User{id=3, username='friend1', about='null', friendships={}, posts=[]}
3
As you can see, the id is null, then 3, and the hashCode is -935581894, then 3... So how is it possible that I was able to get the right element?
Not all Map implementations use the hashCode (for example, a TreeMap does not use it; it uses a Comparator to sort entries into a tree).
So I would first check that Hibernate is not replacing the field:
private Map<User, Friendship> friendships = new HashMap<>();
with its own implementation of Map.
Then, even if Hibernate keeps the HashMap and the hashCode of the object changed, you might be lucky: both the old and the new hashCode can land in the same bucket of the HashMap.
As the object is the same (the Hibernate session guarantees that), the equals used to find the object within the bucket will work. (If a bucket holds more than 8 entries, it is turned from a linked list into a tree ordered on hashCode; in that case the entry would not be found, but your map seems to have only 2-3 elements, so that cannot be the case here.)
Now I understand your question.
Looking at the Map documentation we read the following:
Note: great care must be exercised if mutable objects are used as map
keys. The behavior of a map is not specified if the value of an object
is changed in a manner that affects equals comparisons while the
object is a key in the map.
It looks like there is no definitive answer for this, and as @Thierry already said, it seems that you just got lucky. The key takeaway is: do not use mutable objects as Map keys.
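To see why a key whose hashCode changes breaks HashMap look-ups, here is a minimal, self-contained sketch; MutableKey is a hypothetical stand-in for the entity, using the same hashCode strategy as the User above:

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class MutableKeyDemo {
    // Hypothetical stand-in for an entity whose id is assigned after persisting
    static class MutableKey {
        Long id;
        @Override public boolean equals(Object o) {
            return o instanceof MutableKey && Objects.equals(id, ((MutableKey) o).id);
        }
        @Override public int hashCode() {
            return id == null ? System.identityHashCode(this) : id.hashCode();
        }
    }

    public static void main(String[] args) {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey key = new MutableKey();
        map.put(key, "friendship");  // stored under the identity hash
        key.id = 3L;                 // simulates Hibernate assigning the primary key
        // The entry still sits where the old hash put it, but the look-up now
        // searches with hashCode 3, so it usually misses (unless both hashes
        // happen to collide, which is exactly the "lucky" case described above):
        System.out.println(map.get(key));          // most likely null
        System.out.println(map.containsKey(key));  // most likely false
    }
}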
Why is HashMap.merge doing a null check on the value? HashMap supports null keys and null values, so can someone please tell me why the null check in merge is required?
@Override
public V merge(K key, V value,
               BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
    if (value == null)
        throw new NullPointerException();
    if (remappingFunction == null)
        throw new NullPointerException();
Because of this, I am unable to use Collectors.toMap(Function.identity(), this::get) to collect values into a Map.
The behavior is mandated by the Map.merge contract:
Throws:
…
NullPointerException - if the specified key is null and this map does not support null keys or the value or remappingFunction is null
Note that using Map.merge for Collectors.toMap without a merge function is an implementation detail; it not only disallows null values, it also fails to provide the desired behavior for reporting duplicate keys: the Java 8 implementation wrongly reports one of the two values as the key when there are duplicate keys.
In Java 9, the implementation has been completely rewritten and no longer uses Map.merge. But the new implementation is behaviorally compatible, now having code that explicitly throws when a value is null. So the behavior of Collectors.toMap not accepting null values has been fixed in the code and is no longer an artifact of using Map.merge. (Still speaking of the toMap collector without a merge function only.)
Unfortunately, the documentation does not mention this.
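A quick illustration of that contract (minimal, self-contained sketch):

import java.util.HashMap;
import java.util.Map;

public class MergeNullDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", null);                 // HashMap itself accepts null values via put
        map.merge("a", 1, Integer::sum);    // ok: the old value is null, so 1 is simply stored
        map.merge("b", null, Integer::sum); // throws NullPointerException, per the contract
    }
}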
Because internally Collectors.toMap uses Map#merge, you can't really do anything about it; using the static Collectors.toMap is not an option (and by the way, it is documented to throw a NullPointerException).
But spinning up a custom collector to do what you want (which you have not shown) is not that complicated; here is an example:
Map<Integer, Integer> result = Arrays.asList(null, 1, 2, 3)
        .stream()
        .collect(
                HashMap::new,
                (map, i) -> map.put(i, i),
                HashMap::putAll);
As a workaround for the mentioned problems with null values in toMap and merge, you can use a custom collector in the following manner:
public static <T, R> Map<T, R> mergeTwoMaps(final Map<T, R> map1,
final Map<T, R> map2,
final BinaryOperator<R> mergeFunction) {
return Stream.of(map1, map2).flatMap(map -> map.entrySet().stream())
.collect(HashMap::new,
(accumulator, entry) -> {
R value = accumulator.containsKey(entry.getKey())
? mergeFunction.apply(accumulator.get(entry.getKey()), entry.getValue())
: entry.getValue();
accumulator.put(entry.getKey(), value);
},
HashMap::putAll);
}
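For example, an illustrative call (the maps and the merge function are hypothetical):

Map<String, Integer> first = new HashMap<>();
first.put("a", 1);
first.put("b", null);                 // null values are fine here
Map<String, Integer> second = new HashMap<>();
second.put("b", 2);

// The duplicate key "b" is resolved by the merge function; Map.merge is never called
Map<String, Integer> merged = mergeTwoMaps(first, second, (v1, v2) -> v1 != null ? v1 : v2);
// merged: {a=1, b=2}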
NiFi 1.1.1
I am trying to persist a byte[] using the State Manager.
private byte[] lsnUsedDuringLastLoad;
@Override
public void onTrigger(final ProcessContext context,
                      final ProcessSession session) throws ProcessException {
    ...
    final StateManager stateManager = context.getStateManager();
    try {
        StateMap stateMap = stateManager.getState(Scope.CLUSTER);
        final Map<String, String> newStateMapProperties = new HashMap<>();
        newStateMapProperties.put(ProcessorConstants.LAST_MAX_LSN,
                new String(lsnUsedDuringLastLoad));
        logger.debug("Persisting stateMap : " + newStateMapProperties);
        stateManager.replace(stateMap, newStateMapProperties, Scope.CLUSTER);
    } catch (IOException ioException) {
        logger.error("Error while persisting the state to NiFi", ioException);
        throw new ProcessException("The state(LSN) couldn't be persisted", ioException);
    }
    ...
}
I don't get any exception or even an error log entry, and the processor continues to run.
The following load code always returns a null value (Retrieved the statemap : {}) for the persisted field:
try {
    stateMap = stateManager.getState(Scope.CLUSTER);
    stateMapProperties = new HashMap<>(stateMap.toMap());
    logger.debug("Retrieved the statemap : " + stateMapProperties);
    lastMaxLSN = (stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN) == null
            || stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN).isEmpty())
                    ? null
                    : stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN).getBytes();
    logger.debug("Attempted to load the previous lsn from NiFi state : " + lastMaxLSN);
} catch (IOException ioe) {
    logger.error("Couldn't load the state map", ioe);
    throw new ProcessException(ioe);
}
I am wondering whether ZooKeeper is at fault or whether I have missed something while using the StateMap.
The docs for replace say:
"Updates the value of the component's state to the new value if and only if the value currently is the same as the given oldValue."
https://github.com/apache/nifi/blob/master/nifi-api/src/main/java/org/apache/nifi/components/state/StateManager.java#L79-L92
I would suggest something like this:
if (stateMap.getVersion() == -1) {
stateManager.setState(stateMapProperties, Scope.CLUSTER);
} else {
stateManager.replace(stateMap, stateMapProperties, Scope.CLUSTER);
}
The first time through, when you retrieve the state, the version should be -1 since nothing was ever stored before; in that case you use setState, but on all subsequent passes you can use replace.
The idea behind replace() and its return value is to be able to react to conflicts. Another task on the same node, or on another node in a cluster, might have changed the state in the meantime. When replace() returns false, you can react to the conflict, sort out what can be resolved automatically, and inform the user when it cannot.
This is the code I use:
/**
 * Set or replace a key-value pair in the state, cluster wide. In case of a conflict, it will retry to set the state
 * when the given key does not yet exist in the map. If the key exists and the value is equal to the given value,
 * it does nothing. Otherwise it fails and returns false.
 *
 * @param stateManager that controls state cluster wide.
 * @param key of the key-value pair to be put in the state map.
 * @param value of the key-value pair to be put in the state map.
 * @return true if the state map contains the key with a value equal to the given value, possibly set by this
 *         function; false if a conflict occurred and the key-value pair is different.
 * @throws IOException if the underlying state mechanism throws an exception.
 */
private boolean setState(StateManager stateManager, String key, String value) throws IOException {
    boolean somebodyElseUpdatedWithoutConflict;
    do {
        somebodyElseUpdatedWithoutConflict = false; // reset before each attempt, so a successful retry exits the loop
        StateMap stateMap = stateManager.getState(Scope.CLUSTER);
        // While the next two lines run, another thread might change the state.
        Map<String, String> map = new HashMap<String, String>(stateMap.toMap()); // Make mutable
        String oldValue = map.put(key, value);
        if (!stateManager.replace(stateMap, map, Scope.CLUSTER)) {
            // Conflict happened. Sort out what action to take
            if (oldValue == null)
                somebodyElseUpdatedWithoutConflict = true; // A different key was changed. Retry
            else if (oldValue.equals(value))
                break; // Lazy case. Value already set
            else
                return false; // Unsolvable conflict
        }
    } while (somebodyElseUpdatedWithoutConflict);
    return true;
}
You can replace the part after // Conflict happened... with whatever conflict resolution you need.
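For completeness, a minimal sketch of a call site in onTrigger, reusing the LAST_MAX_LSN key from the question (the error handling is illustrative, and java.nio.charset.StandardCharsets is assumed to be imported):

String lsnAsString = new String(lsnUsedDuringLastLoad, StandardCharsets.UTF_8);
if (!setState(context.getStateManager(), ProcessorConstants.LAST_MAX_LSN, lsnAsString)) {
    // Unsolvable conflict: another node stored a different LSN for this key
    throw new ProcessException("Conflicting LSN already stored in cluster state");
}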
I thought I was being clever: I implemented a sorter that does not use a comparing function that recalculates the sort score of the compared elements on every comparison, but instead calculates the scores (I call them keys) once and caches them. To me that seemed contrary to what the default Dart implementation (or, for that matter, the Java implementation) does.
Anyway, here is my implementation:
class KeySorter<V, K extends Comparable> {
  List<V> list;

  KeySorter(this.list);

  List<V> sort(K keyFn(V)) {
    Map<V, K> keys = {};
    list.sort((e1, e2) {
      var e1Key = keys.putIfAbsent(e1, () => keyFn(e1)),
          e2Key = keys.putIfAbsent(e2, () => keyFn(e2));
      return e1Key.compareTo(e2Key);
    });
    return list;
  }
}
And here is the benchmark:
https://gist.github.com/Gregoor/547c0451c4fa527dd85c
The default implementation beats mine by a factor of 4. How come?
As mentioned in the comments, caching only makes sense if it takes less time to create and look up the cache than to calculate the result from scratch.
Your implementation also artificially slows down the cache look-up by using putIfAbsent unnecessarily. Replacing this with an initial cache population followed by direct key look-ups reduces the performance difference to only a factor of 2:
List<V> sort(K keyFn(V)) {
  Map<V, K> keys = {};
  list.forEach((e) => keys[e] = keyFn(e));
  list.sort((e1, e2) {
    return keys[e1].compareTo(keys[e2]);
  });
  return list;
}