Locking on a cache key

I've read several questions similar to this, but none of the answers provide ideas of how to clean up memory while still maintaining lock integrity. I'm estimating the number of key-value pairs at a given time to be in the tens of thousands, but the number of key-value pairs over the lifespan of the data structure is virtually infinite (realistically it probably wouldn't be more than a billion, but I'm coding to the worst case).
I have an interface:
public interface KeyLock<K extends Comparable<? super K>> {
    public void lock(K key);
    public void unlock(K key);
}
with a default implementation:
public class DefaultKeyLock<K extends Comparable<? super K>> implements KeyLock<K> {

    private final ConcurrentMap<K, Mutex> lockMap;

    public DefaultKeyLock() {
        lockMap = new ConcurrentSkipListMap<K, Mutex>();
    }

    @Override
    public void lock(K key) {
        Mutex mutex = new Mutex();
        Mutex existingMutex = lockMap.putIfAbsent(key, mutex);
        if (existingMutex != null) {
            mutex = existingMutex;
        }
        mutex.lock();
    }

    @Override
    public void unlock(K key) {
        Mutex mutex = lockMap.get(key);
        mutex.unlock();
    }
}
This works nicely, but the map never gets cleaned up. What I have so far for a clean implementation is:
public class CleanKeyLock<K extends Comparable<? super K>> implements KeyLock<K> {

    private final ConcurrentMap<K, LockWrapper> lockMap;

    public CleanKeyLock() {
        lockMap = new ConcurrentSkipListMap<K, LockWrapper>();
    }

    @Override
    public void lock(K key) {
        LockWrapper wrapper = new LockWrapper(key);
        wrapper.addReference();
        LockWrapper existingWrapper = lockMap.putIfAbsent(key, wrapper);
        if (existingWrapper != null) {
            wrapper = existingWrapper;
            wrapper.addReference();
        }
        wrapper.lock();
    }

    @Override
    public void unlock(K key) {
        LockWrapper wrapper = lockMap.get(key);
        if (wrapper != null) {
            wrapper.unlock();
            wrapper.removeReference();
        }
    }

    private class LockWrapper {

        private final K key;
        private final ReentrantLock lock;
        private int referenceCount;

        public LockWrapper(K key) {
            this.key = key;
            lock = new ReentrantLock();
            referenceCount = 0;
        }

        public synchronized void addReference() {
            lockMap.put(key, this);
            referenceCount++;
        }

        public synchronized void removeReference() {
            referenceCount--;
            if (referenceCount == 0) {
                lockMap.remove(key);
            }
        }

        public void lock() {
            lock.lock();
        }

        public void unlock() {
            lock.unlock();
        }
    }
}
This works for two threads accessing a single key lock, but once a third thread is introduced the lock integrity is no longer guaranteed. Any ideas?

I don't buy that this works for two threads. Consider this:
(Thread A) calls lock(x), now holds lock x
thread switch
(Thread B) calls lock(x), putIfAbsent() returns the current wrapper for x
thread switch
(Thread A) calls unlock(x), the wrapper reference count hits 0 and it gets removed from the map
(Thread A) calls lock(x), putIfAbsent() inserts a new wrapper for x
(Thread A) locks on the new wrapper
thread switch
(Thread B) locks on the old wrapper
How about:
LockWrapper starts with a reference count of 1
addReference() returns false if the reference count is 0
in lock(), if existingWrapper != null, we call addReference() on it. If this returns false, it has already been removed from the map, so we loop back and try again from the putIfAbsent()
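A minimal sketch of that idea, reusing the names from the question (a sketch of the suggested fix, not a tested drop-in implementation). The unlock(K) method can stay exactly as in the question: get the wrapper, unlock it, then removeReference().
private class LockWrapper {

    private final K key;
    private final ReentrantLock lock = new ReentrantLock();
    private int referenceCount = 1; // starts at 1: the creating thread holds a reference

    public LockWrapper(K key) {
        this.key = key;
    }

    // Returns false if the wrapper has already been retired (count reached 0),
    // i.e. it has been, or is about to be, removed from the map.
    public synchronized boolean addReference() {
        if (referenceCount == 0) {
            return false;
        }
        referenceCount++;
        return true;
    }

    public synchronized void removeReference() {
        referenceCount--;
        if (referenceCount == 0) {
            lockMap.remove(key, this); // remove only if still mapped to this wrapper
        }
    }

    public void lock() {
        lock.lock();
    }

    public void unlock() {
        lock.unlock();
    }
}

@Override
public void lock(K key) {
    for (;;) {
        LockWrapper wrapper = new LockWrapper(key); // reference count already 1
        LockWrapper existing = lockMap.putIfAbsent(key, wrapper);
        if (existing == null) {
            wrapper.lock();
            return;
        }
        if (existing.addReference()) {
            existing.lock();
            return;
        }
        // The existing wrapper was retired concurrently; retry from putIfAbsent().
    }
}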

I would use a fixed array by default for a striped lock, since you can size it to the concurrency level that you expect. While there may be hash collisions, a good spreader will resolve that. If the locks are used for short critical sections, then you may be creating contention in the ConcurrentHashMap that defeats the optimization.
You're welcome to adapt my implementation, though I only implemented the dynamic version for fun. It didn't seem useful in practice so only the fixed was used in production. You can use the hash() function from ConcurrentHashMap to provide a good spreading.
See ReentrantStripedLock in http://code.google.com/p/concurrentlinkedhashmap/wiki/IndexableCache
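For comparison, a bare-bones fixed striped lock might look like the following. This is only a sketch of the idea described above, not the linked ReentrantStripedLock; the hash spreader is in the spirit of the one ConcurrentHashMap uses.
public class StripedKeyLock<K extends Comparable<? super K>> implements KeyLock<K> {

    private final ReentrantLock[] stripes;

    public StripedKeyLock(int concurrencyLevel) {
        // Round the stripe count up to a power of two so the index can be computed with a mask.
        int size = Integer.highestOneBit(Math.max(2, concurrencyLevel) - 1) << 1;
        stripes = new ReentrantLock[size];
        for (int i = 0; i < size; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    @Override
    public void lock(K key) {
        stripes[indexFor(key)].lock();
    }

    @Override
    public void unlock(K key) {
        stripes[indexFor(key)].unlock();
    }

    private int indexFor(K key) {
        // Spread the hash so poor hashCode() implementations don't pile onto a single stripe.
        int h = key.hashCode();
        h ^= (h >>> 20) ^ (h >>> 12);
        h ^= (h >>> 7) ^ (h >>> 4);
        return h & (stripes.length - 1);
    }
}
The trade-off is that two different keys can hash to the same stripe, so unrelated operations may occasionally contend; for short critical sections and a stripe count sized to the expected concurrency that is usually acceptable, and nothing ever has to be cleaned up.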


Vert.x aerospike client to use context-bound event loop

The standard Java Aerospike client's methods have overloads that allow passing an EventLoop as an argument. When running inside Vert.x, the client is not aware of the context-bound event loop and simply falls back to if (eventLoop == null) { eventLoop = cluster.eventLoops.next(); }, which could (and likely does) cause extra context switching and concurrency that hurts performance (still a theory, but I want to prove it), because there is no guarantee that the Aerospike requests will run on the same event loop as the incoming HTTP request, per the Vert.x Multi-Reactor pattern. Open-source Aerospike clients like vertx-aerospike-client have the same disadvantage. With Vert.x there is no way (at least none I'm aware of) to retrieve the context-bound event loop and pass it to the Aerospike client.
Vert.x has a method to retrieve the Context (Vertx.currentContext()), but retrieving the EventLoop is not available.
Any ideas?
Finally I've built this:
public class ContextEventLoop {

    private final NettyEventLoops eventLoops;

    public ContextEventLoop(final NettyEventLoops eventLoops) {
        this.eventLoops = Objects.requireNonNull(eventLoops, "eventLoops");
    }

    public EventLoop resolve() {
        final ContextInternal ctx = ContextInternal.current();
        final EventLoop eventLoop;
        if (ctx != null
                && ctx.isEventLoopContext()
                && (eventLoop = eventLoops.get(ctx.nettyEventLoop())) != null) {
            return eventLoop;
        }
        return eventLoops.next();
    }
}
@NotNull
public EventLoops wrap(final EventLoops fallback,
                       final Supplier<@NotNull EventLoop> next) {
    return new EventLoops() {

        @Override
        public EventLoop[] getArray() {
            return fallback.getArray();
        }

        @Override
        public int getSize() {
            return fallback.getSize();
        }

        @Override
        public EventLoop get(int index) {
            return fallback.get(index);
        }

        @Override
        public EventLoop next() {
            return next.get();
        }

        @Override
        public void close() {
            fallback.close();
        }
    };
}
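To show how the pieces might be wired together end to end, here is a rough, untested sketch. It assumes Vertx exposes its Netty EventLoopGroup via nettyEventLoopGroup(), that the Aerospike client takes NettyEventLoops through ClientPolicy.eventLoops, and that the async calls accept an EventLoop as their first argument; the host, namespace and key names are illustrative.
Vertx vertx = Vertx.vertx();

// Reuse Vert.x's Netty event loops for the Aerospike client.
NettyEventLoops eventLoops = new NettyEventLoops(vertx.nettyEventLoopGroup());

ClientPolicy policy = new ClientPolicy();
policy.eventLoops = eventLoops;
AerospikeClient client = new AerospikeClient(policy, new Host("localhost", 3000));

ContextEventLoop contextEventLoop = new ContextEventLoop(eventLoops);

// Inside an HTTP handler running on an event-loop context, pin the Aerospike call
// to the same event loop instead of letting the client round-robin via next():
client.get(contextEventLoop.resolve(), new RecordListener() {
    @Override
    public void onSuccess(Key key, Record record) { /* handle the record */ }

    @Override
    public void onFailure(AerospikeException e) { /* handle the error */ }
}, null, new Key("test", "demo", "some-key"));
Alternatively, the wrap(...) method above can decorate the EventLoops instance passed into ClientPolicy so that the client's own next() calls resolve to the context-bound event loop, which avoids having to pass an EventLoop explicitly on every call.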

How does ForkJoinPool#awaitQuiescence actually work?

I have the following implementation of RecursiveAction; the single purpose of this class is to print the numbers from 0 to 9, from different threads if possible:
public class MyRecursiveAction extends RecursiveAction {

    private final int num;

    public MyRecursiveAction(int num) {
        this.num = num;
    }

    @Override
    protected void compute() {
        if (num < 10) {
            System.out.println(num);
            new MyRecursiveAction(num + 1).fork();
        }
    }
}
And I thought that invoking awaitQuiescence would make the current thread wait until all tasks (submitted and forked) have completed:
public class Main {
    public static void main(String[] args) {
        ForkJoinPool forkJoinPool = new ForkJoinPool();
        forkJoinPool.execute(new MyRecursiveAction(0));
        System.out.println(forkJoinPool.awaitQuiescence(5, TimeUnit.SECONDS) ? "tasks" : "time");
    }
}
But I don't always get the correct result: instead of printing 10 numbers, it prints anywhere from 0 to 10 of them.
But if I add helpQuiesce to my implementation of RecursiveAction:
public class MyRecursiveAction extends RecursiveAction {

    private final int num;

    public MyRecursiveAction(int num) {
        this.num = num;
    }

    @Override
    protected void compute() {
        if (num < 10) {
            System.out.println(num);
            new MyRecursiveAction(num + 1).fork();
        }
        RecursiveAction.helpQuiesce(); // here
    }
}
Everything works fine.
I want to know what awaitQuiescence is actually waiting for.
You get an idea of what happens when you change the System.out.println(num); to System.out.println(num + " " + Thread.currentThread());
This may print something like:
0 Thread[ForkJoinPool-1-worker-3,5,main]
1 Thread[main,5,main]
tasks
2 Thread[ForkJoinPool.commonPool-worker-3,5,main]
When awaitQuiescence detects that there are pending tasks, it helps out by stealing one and executing it directly. Its documentation says:
If called by a ForkJoinTask operating in this pool, equivalent in effect to ForkJoinTask.helpQuiesce(). Otherwise, waits and/or attempts to assist performing tasks until this pool isQuiescent() or the indicated timeout elapses.
Emphasis added by me
This happens here, as we can see, a task prints “main” as its executing thread. Then, the behavior of fork() is specified as:
Arranges to asynchronously execute this task in the pool the current task is running in, if applicable, or using the ForkJoinPool.commonPool() if not inForkJoinPool().
Since the main thread is not a worker thread of a ForkJoinPool, the fork() will submit the new task to the commonPool(). From that point on, the fork() invoked from a common pool’s worker thread will submit the next task to the common pool too. But awaitQuiescence invoked on the custom pool doesn’t wait for the completion of the common pool’s tasks and the JVM terminates too early.
If you’re going to say that this is a flawed API design, I wouldn’t object.
The solution is not to use awaitQuiescence for anything but the common pool¹. Normally, a RecursiveAction that splits off sub tasks should wait for their completion. Then, you can wait for the root task’s completion to wait for the completion of all associated tasks.
The second half of this answer contains an example of such a RecursiveAction implementation.
¹ awaitQuiescence is useful when you don’t have hands on the actual futures, like with a parallel stream that submits to the common pool.
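As a small illustration of that footnote, here is a hypothetical case where awaitQuiescence on the common pool is the only handle you have, because the forked side tasks are never joined:
// Side tasks are forked into the common pool without keeping their futures.
List<Integer> ids = List.of(1, 2, 3, 4);
ids.parallelStream().forEach(id ->
        ForkJoinTask.adapt(() -> System.out.println("post-processing " + id)).fork());

// forEach returns once the stream elements are consumed, but the forked tasks may
// still be pending; quiescing the common pool waits for them (or for the timeout).
ForkJoinPool.commonPool().awaitQuiescence(5, TimeUnit.SECONDS);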
Everything works fine.
No it does not, you got lucky that it worked when you inserted:
RecursiveAction.helpQuiesce();
To explain this let's slightly change your example a bit:
static class MyRecursiveAction extends RecursiveAction {

    private final int num;

    public MyRecursiveAction(int num) {
        this.num = num;
    }

    @Override
    protected void compute() {
        if (num < 10) {
            System.out.println(num);
            new MyRecursiveAction(num + 1).fork();
        }
    }
}
public static void main(String[] args) {
    ForkJoinPool forkJoinPool = new ForkJoinPool();
    forkJoinPool.execute(new MyRecursiveAction(0));
    LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(2));
}
If you run this, you will notice that you get the result you expect. There are two main reasons for this. First, the fork method will execute the task in the common pool, as the other answer already explained. Second, the threads in the common pool are daemon threads: the JVM does not wait for them to finish before exiting, it exits early. So if that is the case, you might ask why it works here. It does because of this line:
LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(2));
which makes the main thread (which is a non daemon thread) sleep for two seconds, giving enough time for the ForkJoinPool to execute your task.
Now let's change the code closer to your example:
public static void main(String[] args) {
    ForkJoinPool forkJoinPool = new ForkJoinPool();
    forkJoinPool.execute(new MyRecursiveAction(0));
    System.out.println(forkJoinPool.awaitQuiescence(5, TimeUnit.SECONDS) ? "tasks" : "time");
}
specifically, you use: forkJoinPool.awaitQuiescence(...), which is documented as:
Otherwise, waits and/or attempts to assist performing tasks...
It does not say that it will necessarily wait; it says it will "wait and/or attempt ...", and in this case it is more or than and. As such, it will attempt to help, but it still will not wait for all the tasks to finish. Is this weird, or even stupid?
When you insert RecursiveAction.helpQuiesce(); you are eventually calling the same awaitQuiescence (with different arguments) under the hood - so essentially nothing changes; the fundamental problem is still there:
static ForkJoinPool forkJoinPool = new ForkJoinPool();
static AtomicInteger res = new AtomicInteger(0);

public static void main(String[] args) {
    forkJoinPool.execute(new MyRecursiveAction(0));
    System.out.println(forkJoinPool.awaitQuiescence(5, TimeUnit.SECONDS) ? "tasks" : "time");
    System.out.println(res.get());
}

static class MyRecursiveAction extends RecursiveAction {

    private final int num;

    public MyRecursiveAction(int num) {
        this.num = num;
    }

    @Override
    protected void compute() {
        if (num < 10_000) {
            res.incrementAndGet();
            System.out.println(num + " thread : " + Thread.currentThread().getName());
            new MyRecursiveAction(num + 1).fork();
        }
        RecursiveAction.helpQuiesce();
    }
}
When I run this, it never prints 10000, showing that inserting that line changes nothing.
The usual way to handle such things is to fork and then join, plus one more join in the caller on the ForkJoinTask that you get back from submit. Something like:
public static void main(String[] args) {
    ForkJoinPool forkJoinPool = new ForkJoinPool(2);
    ForkJoinTask<Void> task = forkJoinPool.submit(new MyRecursiveAction(0));
    task.join();
}

static class MyRecursiveAction extends RecursiveAction {

    private final int num;

    public MyRecursiveAction(int num) {
        this.num = num;
    }

    @Override
    protected void compute() {
        if (num < 10) {
            System.out.println(num);
            MyRecursiveAction ac = new MyRecursiveAction(num + 1);
            ac.fork();
            ac.join();
        }
    }
}

java application crashed by suspicious jdbc memory leak

I have been working on a Java application which crawls pages from the Internet with http-client (version 4.3.3). It uses one fixedThreadPool with 5 threads, each of which is a loop thread. The pseudocode follows.
public class Spiderling implements Runnable {
    @Override
    public void run() {
        while (true) {
            T task = null;
            try {
                task = scheduler.poll();
                if (task != null) {
                    if (Ehcache contains task's config)
                        taskConfig = Ehcache.getConfig;
                    else {
                        taskConfig = query task config from db; // close the conn every time
                        put taskConfig into Ehcache;
                    }
                    spider(task, taskConfig);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        LOG.error("spiderling is DEAD");
    }
}
I am running it with the following arguments: -Duser.timezone=GMT+8 -server -Xms1536m -Xmx1536m -Xloggc:/home/datalord/logs/gc-2016-07-23-10-28-24.log -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC on a server (2 CPUs, 2 GB memory), and it crashes pretty regularly, about once every two or three days, with no OutOfMemoryError and no JVM error log.
Here is my analysis:
I analysed the GC log with GC-EASY (the report is here). The weird thing is that the Old Gen grows slowly until it reaches the allocated max heap size, but a Full GC has never happened even once.
I suspect it might have a memory leak, so I dumped the heap using jmap -dump:format=b,file=soldier.bin and used Eclipse MAT to analyze the dump file. Here is the problem suspect, an object that occupies 280+ MB:
The class "com.mysql.jdbc.NonRegisteringDriver",
loaded by "sun.misc.Launcher$AppClassLoader @ 0xa0018490", occupies 281,118,144
(68.91%) bytes. The memory is accumulated in one instance of
"java.util.concurrent.ConcurrentHashMap$Segment[]" loaded by "".
Keywords
com.mysql.jdbc.NonRegisteringDriver
java.util.concurrent.ConcurrentHashMap$Segment[]
sun.misc.Launcher$AppClassLoader @ 0xa0018490
I use c3p0-0.9.1.2 as the MySQL connection pool, mysql-connector-java-5.1.34 as the JDBC connector, and Ehcache-2.6.10 as the memory cache. I have read all the posts about 'com.mysql.jdbc.NonRegisteringDriver memory leak' and still have no clue.
This problem has driven me crazy for several days, any advice or help will be appreciated!
**********************Supplementary description on 07-24****************
I use a Java web + ORM framework called JFinal (github.com/jfinal/jfinal), which is open source on GitHub.
Here is some core code to further describe the problem.
/**
 * CacheKit. Useful tool box for EhCache.
 *
 */
public class CacheKit {

    private static CacheManager cacheManager;
    private static final Logger log = Logger.getLogger(CacheKit.class);

    static void init(CacheManager cacheManager) {
        CacheKit.cacheManager = cacheManager;
    }

    public static CacheManager getCacheManager() {
        return cacheManager;
    }

    static Cache getOrAddCache(String cacheName) {
        Cache cache = cacheManager.getCache(cacheName);
        if (cache == null) {
            synchronized (cacheManager) {
                cache = cacheManager.getCache(cacheName);
                if (cache == null) {
                    log.warn("Could not find cache config [" + cacheName + "], using default.");
                    cacheManager.addCacheIfAbsent(cacheName);
                    cache = cacheManager.getCache(cacheName);
                    log.debug("Cache [" + cacheName + "] started.");
                }
            }
        }
        return cache;
    }

    public static void put(String cacheName, Object key, Object value) {
        getOrAddCache(cacheName).put(new Element(key, value));
    }

    @SuppressWarnings("unchecked")
    public static <T> T get(String cacheName, Object key) {
        Element element = getOrAddCache(cacheName).get(key);
        return element != null ? (T)element.getObjectValue() : null;
    }

    @SuppressWarnings("rawtypes")
    public static List getKeys(String cacheName) {
        return getOrAddCache(cacheName).getKeys();
    }

    public static void remove(String cacheName, Object key) {
        getOrAddCache(cacheName).remove(key);
    }

    public static void removeAll(String cacheName) {
        getOrAddCache(cacheName).removeAll();
    }

    @SuppressWarnings("unchecked")
    public static <T> T get(String cacheName, Object key, IDataLoader dataLoader) {
        Object data = get(cacheName, key);
        if (data == null) {
            data = dataLoader.load();
            put(cacheName, key, data);
        }
        return (T)data;
    }

    @SuppressWarnings("unchecked")
    public static <T> T get(String cacheName, Object key, Class<? extends IDataLoader> dataLoaderClass) {
        Object data = get(cacheName, key);
        if (data == null) {
            try {
                IDataLoader dataLoader = dataLoaderClass.newInstance();
                data = dataLoader.load();
                put(cacheName, key, data);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
        return (T)data;
    }
}
I use CacheKit like CacheKit.get("cfg_extract_rule_tree", extractRootId, new ExtractRuleTreeDataloader(extractRootId)), and the class ExtractRuleTreeDataloader will be called if nothing is found in the cache for extractRootId.
public class ExtractRuleTreeDataloader implements IDataLoader {

    public static final Logger LOG = LoggerFactory.getLogger(ExtractRuleTreeDataloader.class);

    private int ruleTreeId;

    public ExtractRuleTreeDataloader(int ruleTreeId) {
        super();
        this.ruleTreeId = ruleTreeId;
    }

    @Override
    public Object load() {
        List<Record> ruleTreeList = Db.find("SELECT * FROM cfg_extract_fule WHERE root_id=?", ruleTreeId);
        TreeHelper<ExtractRuleNode> treeHelper = ExtractUtil.batchRecordConvertTree(ruleTreeList); // convert List<Record> to a tree
        if (treeHelper.isValidTree()) {
            return treeHelper.getRoot();
        } else {
            LOG.warn("rule tree id :{} is an error tree #end#", ruleTreeId);
            return null;
        }
    }
}
As I said before, I use the JFinal ORM. The Db.find method code is:
public List<Record> find(String sql, Object... paras) {
    Connection conn = null;
    try {
        conn = config.getConnection();
        return find(config, conn, sql, paras);
    } catch (Exception e) {
        throw new ActiveRecordException(e);
    } finally {
        config.close(conn);
    }
}
and the config close method code is
public final void close(Connection conn) {
    if (threadLocal.get() == null)   // in transaction if conn in threadlocal
        if (conn != null)
            try { conn.close(); } catch (SQLException e) { throw new ActiveRecordException(e); }
}
There is no transaction in my code, so I am pretty sure conn.close() will be called every time.
**********************more description on 07-28****************
First, I use Ehcache to store the taskConfigs in memory. The taskConfigs almost never change, so I want to keep them in memory eternally and spill them to disk if memory cannot hold them all.
I used MAT to find the GC roots of NonRegisteringDriver; the result is shown in the following picture.
[Picture: the GC roots of NonRegisteringDriver]
But I still don't understand why the default behavior of Ehcache leads to a memory leak. TaskConfig is a class that extends the Model class.
public class TaskConfig extends Model<TaskConfig> {
    private static final long serialVersionUID = 5000070716569861947L;
    public static TaskConfig DAO = new TaskConfig();
}
The source code of Model is on this page (github.com/jfinal/jfinal/blob/jfinal-2.0/src/com/jfinal/plugin/activerecord/Model.java), and I can't find any reference (either direct or indirect) to the connection object, as @Jeremiah guessed.
Then I read the source code of NonRegisteringDriver, and I don't understand why its map field connectionPhantomRefs holds more than 5000 entries of <ConnectionPhantomReference, ConnectionPhantomReference>, yet I find no ConnectionImpl in its queue field refQueue. I can see the cleanup code in the class AbandonedConnectionCleanupThread, which should remove the ref from NonRegisteringDriver.connectionPhantomRefs whenever it takes an abandoned connection ref from NonRegisteringDriver.refQueue.
@Override
public void run() {
    threadRef = this;
    while (running) {
        try {
            Reference<? extends ConnectionImpl> ref = NonRegisteringDriver.refQueue.remove(100);
            if (ref != null) {
                try {
                    ((ConnectionPhantomReference) ref).cleanup();
                } finally {
                    NonRegisteringDriver.connectionPhantomRefs.remove(ref);
                }
            }
        } catch (Exception ex) {
            // no where to really log this if we're static
        }
    }
}
I appreciate the help offered by @Jeremiah!
From the comments above I'm almost certain your memory leak is actually memory usage from EhCache. The ConcurrentHashMap you're seeing is the one backing the MemoryStore, and I'm guessing that the taskConfig holds a reference (either directly or indirectly) to the connection object, which is why it's showing in your stack.
Having eternal="true" in the default cache makes it so the inserted objects are never allowed to expire. Even without that, the timeToLive and timeToIdle values default to an infinite lifetime!
Combine that with the fact that Ehcache's default behavior when retrieving elements is to copy them (last I checked) through serialization! You're just stacking up new object references each time the taskConfig is extracted and put back into Ehcache.
The best way to test this (in my opinion) is to change your default cache configuration. Change eternal to false, and implement a timeToIdle value. timeToIdle is a time (in seconds) that a value may exist in the cache without being accessed.
<ehcache>
    <diskStore path="java.io.tmpdir"/>
    <defaultCache
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdle="120"
        overflowToDisk="true"
        diskPersistent="false"
        diskExpiryThreadIntervalSeconds="120"/>
</ehcache>
If that works, then you may want to look into further tweaking your ehcache configuration settings, or providing a more customized cache reference other than default for your class.
There are multiple performance considerations when tweaking the ehcache. I'm sure that there is a better configuration for your business model. The Ehcache documentation is good, but I found the site to be a bit scattered when I was trying to figure it out. I've listed some links that I found useful below.
http://www.ehcache.org/documentation/2.8/configuration/cache-size.html
http://www.ehcache.org/documentation/2.8/configuration/configuration.html
http://www.ehcache.org/documentation/2.8/apis/cache-eviction-algorithms.html#provided-memorystore-eviction-algorithms
Good luck!
To test your memory leak try the following:
Insert a TaskConfig into ehcache.
Immediately retrieve it back out of the cache.
Output the value of TaskConfig1.equals(TaskConfig2).
If it returns false, that is your memory leak. Override equals and hashCode in your TaskConfig object and rerun the test.
The root cause of the crash is that the Linux OS runs out of memory and the OOM killer kills the process.
I found the following log in /var/log/messages:
Aug 3 07:24:03 iZ233tupyzzZ kernel: Out of memory: Kill process 17308 (java) score 890 or sacrifice child
Aug 3 07:24:03 iZ233tupyzzZ kernel: Killed process 17308, UID 0, (java) total-vm:2925160kB, anon-rss:1764648kB, file-rss:248kB
Aug 3 07:24:03 iZ233tupyzzZ kernel: Thread (pooled) invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Aug 3 07:24:03 iZ233tupyzzZ kernel: Thread (pooled) cpuset=/ mems_allowed=0
Aug 3 07:24:03 iZ233tupyzzZ kernel: Pid: 6721, comm: Thread (pooled) Not tainted 2.6.32-431.23.3.el6.x86_64 #1
I also found that the default value of maxIdleTime is 20 seconds in the C3p0Plugin, which is a c3p0 plugin in JFinal, so I think this is why the NonRegisteringDriver object occupies 280+ MB as shown in the MAT report. I set maxIdleTime to 3600 seconds, and the NonRegisteringDriver object is no longer a suspect in the MAT report.
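For reference, JFinal's C3p0Plugin is built on c3p0's standard ComboPooledDataSource, so with plain c3p0 the equivalent change would look roughly like this (connection details are illustrative):
ComboPooledDataSource ds = new ComboPooledDataSource();
ds.setJdbcUrl("jdbc:mysql://localhost:3306/spider"); // hypothetical connection details
ds.setUser("spider");
ds.setPassword("secret");
// Keep idle connections for an hour instead of the plugin's 20-second default, so
// connections are not constantly retired and replaced, which appears to be what was
// filling NonRegisteringDriver.connectionPhantomRefs in the MAT report.
ds.setMaxIdleTime(3600);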
I also reset the JVM arguments to -Xms512m -Xmx512m, and the Java program has now been running well for several days. Full GC is triggered as expected when the Old Gen is full.

How do streams stop?

Having created my own infinite stream with Stream.generate, I was wondering how the streams in the standard library stop...
For example when you have a list with records:
List<Record> records = getListWithRecords();
records.stream().forEach(/* do something */);
The stream isn't infinite and doesn't run forever; it stops when all items in the list have been traversed. But how does that work? The same applies to the stream created by Files.lines(path) (source: http://www.mkyong.com/java8/java-8-stream-read-a-file-line-by-line/).
And a second question: how can a stream created with Stream.generate be stopped in the same manner?
Finite streams simply aren’t created via Stream.generate.
The standard way of implementing a stream, is to implement a Spliterator, sometimes using the Iterator detour. In either case, the implementation has a way to report an end, e.g. when Spliterator.tryAdvance returns false or its forEachRemaining method just returns, or in case of an Iterator source, when hasNext() returns false.
A Spliterator may even report the expected number of elements before the processing begins.
Streams created via one of the factory methods of the Stream interface, like Stream.generate, may be implemented either by a Spliterator as well or using internal features of the stream implementation, but regardless of how they are implemented, you don't get your hands on this implementation to change its behavior, so the only way to make such a stream finite is to chain a limit operation to the stream.
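For example, something along these lines turns an otherwise endless generated stream into a finite one:
// Ten pseudo-random numbers; without limit(10) this stream would never end on its own.
Stream.generate(Math::random)
      .limit(10)
      .forEach(System.out::println);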
If you want to create a non-empty finite stream that is not backed by an array or collection and none of the existing stream sources fits, you have to implement your own Spliterator and create a stream out of it. As told above, you can use an existing method to create a Spliterator out of an Iterator, but you should resist the temptation to use an Iterator just because it's familiar. A Spliterator is not hard to implement:
/** like {@code Stream.generate}, but with an intrinsic limit */
static <T> Stream<T> generate(Supplier<T> s, long count) {
    return StreamSupport.stream(
        new Spliterators.AbstractSpliterator<T>(count, Spliterator.SIZED) {
            long remaining = count;

            public boolean tryAdvance(Consumer<? super T> action) {
                if (remaining <= 0) return false;
                remaining--;
                action.accept(s.get());
                return true;
            }
        }, false);
}
From this starting point, you can add overrides for the default methods of the Spliterator interface, weighing development expense against potential performance improvements, e.g.
static <T> Stream<T> generate(Supplier<T> s, long count) {
    return StreamSupport.stream(
        new Spliterators.AbstractSpliterator<T>(count, Spliterator.SIZED) {
            long remaining = count;

            public boolean tryAdvance(Consumer<? super T> action) {
                if (remaining <= 0) return false;
                remaining--;
                action.accept(s.get());
                return true;
            }

            /** May improve the performance of most non-short-circuiting operations */
            @Override
            public void forEachRemaining(Consumer<? super T> action) {
                long toGo = remaining;
                remaining = 0;
                for (; toGo > 0; toGo--) action.accept(s.get());
            }
        }, false);
}
I have created a generic workaround for this
public class GuardedSpliterator<T> implements Spliterator<T> {

    final Supplier<? extends T> generator;
    final Predicate<T> termination;
    final boolean inclusive;

    public GuardedSpliterator(Supplier<? extends T> generator, Predicate<T> termination, boolean inclusive) {
        this.generator = generator;
        this.termination = termination;
        this.inclusive = inclusive;
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        T next = generator.get();
        boolean end = termination.test(next);
        if (inclusive || !end) {
            action.accept(next);
        }
        return !end;
    }

    @Override
    public Spliterator<T> trySplit() {
        throw new UnsupportedOperationException("Not supported yet.");
    }

    @Override
    public long estimateSize() {
        throw new UnsupportedOperationException("Not supported yet.");
    }

    @Override
    public int characteristics() {
        return Spliterator.ORDERED;
    }
}
Usage is pretty easy:
Random rnd = new Random();
GuardedSpliterator<Integer> source = new GuardedSpliterator<>(
        () -> rnd.nextInt(),
        (i) -> i > 10,
        true
);
Stream<Integer> ints = StreamSupport.stream(source, false);
ints.forEach(i -> System.out.println(i));

DeferredResult in spring mvc

I have one class that extends DeferredResult and implements Runnable, as shown below:
public class EventDeferredObject<T> extends DeferredResult<Boolean> implements Runnable {

    private Long customerId;
    private String email;

    @Override
    public void run() {
        RestTemplate restTemplate = new RestTemplate();
        EmailMessageDTO emailMessageDTO = new EmailMessageDTO("dineshshe@gmail.com", "Hi There");
        Boolean result = restTemplate.postForObject("http://localhost:9080/asycn/sendEmail", emailMessageDTO, Boolean.class);
        this.setResult(result);
    }

    //Constructor and getter and setters
}
Now I have a controller that returns an object of the above class. Whenever a new request comes to the controller, we check whether that request is present in a HashMap (which stores the unprocessed requests at that instant). If it is not present, we create an EventDeferredObject, store it in the HashMap, and start it on a new thread. If a request of this type is already present, we return the stored object from the HashMap. On completion of a request we delete it from the HashMap.
@RequestMapping(value="/sendVerificationDetails")
public class SendVerificationDetailsController {

    private ConcurrentMap<String, EventDeferredObject<Boolean>> requestMap = new ConcurrentHashMap<String, EventDeferredObject<Boolean>>();

    @RequestMapping(value="/sendEmail", method=RequestMethod.POST)
    public EventDeferredObject<Boolean> sendEmail(@RequestBody EmailDTO emailDTO)
    {
        EventDeferredObject<Boolean> eventDeferredObject = null;
        System.out.println("Size:" + requestMap.size());
        if(!requestMap.containsKey(emailDTO.getEmail()))
        {
            eventDeferredObject = new EventDeferredObject<Boolean>(emailDTO.getCustomerId(), emailDTO.getEmail());
            requestMap.put(emailDTO.getEmail(), eventDeferredObject);
            Thread t1 = new Thread(eventDeferredObject);
            t1.start();
        }
        else
        {
            eventDeferredObject = requestMap.get(emailDTO.getEmail());
        }
        eventDeferredObject.onCompletion(new Runnable() {
            @Override
            public void run() {
                if(requestMap.containsKey(emailDTO.getEmail()))
                {
                    requestMap.remove(emailDTO.getEmail());
                }
            }
        });
        return eventDeferredObject;
    }
}
Now this code works fine as long as no request arrives that is identical to one already stored in the HashMap; if a number of different requests come in at the same time, the code works fine.
Well, I do not know if I understood correctly, but I think you might have race conditions in the code, for example here:
if(!requestMap.containsKey(emailDTO.getEmail()))
{
    eventDeferredObject = new EventDeferredObject<Boolean>(emailDTO.getCustomerId(), emailDTO.getEmail());
    requestMap.put(emailDTO.getEmail(), eventDeferredObject);
    Thread t1 = new Thread(eventDeferredObject);
    t1.start();
}
else
{
    eventDeferredObject = requestMap.get(emailDTO.getEmail());
}
think of a scenario in which you have two requests with the same key emailDTO.getEmail().
Request 1 checks if there is a key in the map, does not find it and puts it inside.
Request 2 comes some time later, checks if there is a key in the map, finds it, and
goes to fetch it; however just before that, the thread started by request 1 finishes and another thread, started by onComplete event, removes the key from the map. At this point,
requestMap.get(emailDTO.getEmail())
will return null, and as a result you will have a NullPointerException.
Now, this does look like a rare scenario, so I do not know if this is the problem you see.
I would try to modify the code as follows (I did not run it myself, so I might have errors):
public class EventDeferredObject<T> extends DeferredResult<Boolean> implements Runnable {

    private Long customerId;
    private String email;
    private ConcurrentMap ourConcurrentMap;

    @Override
    public void run() {
        ...
        this.setResult(result);
        ourConcurrentMap.remove(this.email);
    }

    //Constructor and getter and setters
}
so the DeferredResult implementation has the responsibility to remove itself from the concurrent map. Moreover, I do not use onCompletion to register a callback, as it seems to me an unnecessary complication. To avoid the race conditions I talked about before, one needs to combine the check for the presence of an entry and its fetching into one atomic operation; this is what the putIfAbsent method of ConcurrentMap does. Therefore I change the controller into:
@RequestMapping(value="/sendVerificationDetails")
public class SendVerificationDetailsController {

    private ConcurrentMap<String, EventDeferredObject<Boolean>> requestMap = new ConcurrentHashMap<String, EventDeferredObject<Boolean>>();

    @RequestMapping(value="/sendEmail", method=RequestMethod.POST)
    public EventDeferredObject<Boolean> sendEmail(@RequestBody EmailDTO emailDTO)
    {
        EventDeferredObject<Boolean> eventDeferredObject = new EventDeferredObject<Boolean>(emailDTO.getCustomerId(), emailDTO.getEmail(), requestMap);
        EventDeferredObject<Boolean> oldEventDeferredObject = requestMap.putIfAbsent(emailDTO.getEmail(), eventDeferredObject);
        if(oldEventDeferredObject == null)
        {
            // if no value was present before
            Thread t1 = new Thread(eventDeferredObject);
            t1.start();
            return eventDeferredObject;
        }
        else
        {
            return oldEventDeferredObject;
        }
    }
}
If this does not solve the problem, I hope it at least gives you some ideas.
