Caffeine versus Guava cache - caching

According to these microbenchmarks, it turns out that Caffeine is considerably faster than the Guava cache in both read and write operations.
What is the secret of the Caffeine implementation? How does it differ from the Guava cache?
Am I right that, in the case of timed expiration, Caffeine uses a scheduled executor to perform the appropriate maintenance operations in the background?

The main difference is that Caffeine uses ring buffers to record & replay events, whereas Guava uses a ConcurrentLinkedQueue. The intent was always to migrate Guava over, and it made sense to start simpler, but unfortunately there was never interest in accepting those changes. The ring buffer approach avoids allocation, is bounded (lossy), and is cheaper to operate against.
The remaining costs are due to a design mismatch. The original author of MapMaker was enthusiastic about soft references as the solution to caching problems, deferring eviction to the GC. Unfortunately, while that can seem fast in microbenchmarks, it has horrible performance in practice due to stop-the-world GC thrashing. The size-based solution had to be adapted into this design, and that is not ideal. Caffeine optimizes for size-based eviction and also gains an improved hash table, whereas Guava handles reference caching more elegantly.
Caffeine doesn't create its own threads for maintenance or expiration. It does defer the cost to the commonPool, which slightly improves user-facing latencies but not throughput. A future version might leverage CompletableFuture.delayedExecutor to schedule the next expiration event without directly creating threads (for users who have business logic depending on prompt removal notifications).
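A minimal sketch of that idea from the caller's side, assuming Java 9+ and Caffeine on the classpath (this is only an illustration of the concept, not Caffeine's internal mechanism): the JDK's delayed executor can trigger a maintenance pass shortly after an entry is due to expire, without creating a dedicated cleanup thread.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

class PromptExpiration {
    public static void main(String[] args) throws InterruptedException {
        Cache<String, String> cache = Caffeine.newBuilder()
            .expireAfterWrite(5, TimeUnit.SECONDS)
            .build();
        cache.put("key", "value");

        // Schedule a maintenance pass shortly after the entry is due to expire, so any
        // removal notification fires promptly instead of waiting for the next cache access.
        Executor delayed = CompletableFuture.delayedExecutor(6, TimeUnit.SECONDS);
        delayed.execute(cache::cleanUp);

        Thread.sleep(7_000);   // keep the demo JVM alive long enough for the delayed task to run
    }
}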
ConcurrentLinkedHashMap and MapMaker were written at the same time, and CLHM has similar performance to Caffeine. I believe the difference is due to what scenarios the designers favored and optimized for, which impacted how other features would be implemented. There is low-hanging fruit that would allow Guava to have a similar performance profile, but there isn't an internal champion to drive that (and even less so with Caffeine as a favored alternative).

High frequent concurrency for cache

I am learning about caching and have a question on cache concurrency.
As far as I know, an LRU cache is implemented with a doubly linked list plus a hash table. So how does an LRU cache handle highly frequent concurrent access? Note that both getting data from the cache and putting data into the cache update the linked list and the hash table, so the cache is modified all the time.
If we use a mutex lock for thread safety, won't the cache be slowed down when it is accessed by a large number of users? If we do not use a lock, what techniques are used? Thanks in advance.
Traditional LRU caches were not designed for high concurrency, both because hardware was limited and because the hit penalty is far smaller than the miss penalty (e.g. a database lookup). For most applications, locking the cache is acceptable if the lock is only used to update the underlying structure (not to compute the value on a miss). Simple techniques like segmenting the LRU policy were usually good enough when the locks became contended.
The way to make an LRU cache scale is to avoid updating the policy on every access. The critical observation to make is that the user of the cache does not care what the current LRU ordering is. The only concern of the caller is that the cache maintains a threshold size and a high hit rate. This opens the door for optimizations by avoiding mutating the LRU policy on every read.
The approach taken by memcached is to discard subsequent reads within a time window, e.g. one second. The cache is expected to be very large, so there is a very low chance of evicting a poor candidate with this simpler LRU.
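A rough Java sketch of that time-window idea (illustrative only; memcached's actual implementation differs): only touch the LRU ordering if the entry has not been bumped within the last second.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class TimeWindowedTouch<K> {
    private static final long WINDOW_NANOS = 1_000_000_000L;   // one-second window

    private final ConcurrentMap<K, Long> lastBump = new ConcurrentHashMap<>();

    boolean shouldUpdateLru(K key) {
        long now = System.nanoTime();
        Long prev = lastBump.get(key);
        if (prev != null && now - prev < WINDOW_NANOS) {
            return false;                 // read within the window: skip the LRU update
        }
        lastBump.put(key, now);
        return true;                      // outside the window: record the access normally
    }
}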
The approach taken by ConcurrentLinkedHashMap (CLHM), and subsequently Guava's Cache, is to record the access in a buffer. This buffer is drained under the LRU's lock, and by using a try-lock no other operation has to block. CLHM uses multiple ring buffers that are lossy if the cache cannot keep up, since losing events is preferred to degraded performance.
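The buffer-and-drain idea looks roughly like the following Java sketch (the LruPolicy interface and buffer size are illustrative assumptions, not CLHM's actual internals): reads append to a small, lossy buffer, and whichever thread wins the try-lock replays the buffered accesses against the LRU ordering, so no reader ever blocks on the eviction lock.
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

class BufferedLruReads<K> {
    interface LruPolicy<T> { void recordAccess(T key); }

    private final Queue<K> readBuffer = new ArrayBlockingQueue<>(128);
    private final ReentrantLock evictionLock = new ReentrantLock();
    private final LruPolicy<K> policy;

    BufferedLruReads(LruPolicy<K> policy) { this.policy = policy; }

    void onRead(K key) {
        readBuffer.offer(key);            // lossy: if the buffer is full, the event is dropped
        if (evictionLock.tryLock()) {     // non-blocking: skip if another thread is already draining
            try {
                for (K k; (k = readBuffer.poll()) != null; ) {
                    policy.recordAccess(k);   // replay buffered accesses against the LRU ordering
                }
            } finally {
                evictionLock.unlock();
            }
        }
    }
}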
The approach taken by Ehcache and Redis is a probabilistic LRU policy. A read updates the entry's timestamp, and a write iterates the cache to obtain a random sample. The oldest entry is evicted from that sample. If the sample is fast to construct and the cache is large, the evicted entry was likely a good candidate.
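A sketch of sampled eviction in the same spirit (field names and sample size are illustrative assumptions; Redis samples its internal dictionary directly rather than copying the key set):
import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

class SampledEviction<K> {
    private static final int SAMPLE_SIZE = 8;

    private final Map<K, Long> lastAccess;   // key -> last-access timestamp in nanoseconds

    SampledEviction(Map<K, Long> lastAccess) { this.lastAccess = lastAccess; }

    K pickVictim() {
        // Copying the key set is O(n) and is only done here for brevity; a real implementation
        // samples the underlying table directly so the sample is cheap to construct.
        List<K> keys = List.copyOf(lastAccess.keySet());
        if (keys.isEmpty()) {
            return null;
        }
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        K victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < SAMPLE_SIZE; i++) {
            K candidate = keys.get(rnd.nextInt(keys.size()));
            long ts = lastAccess.getOrDefault(candidate, Long.MAX_VALUE);
            if (ts < oldest) {
                oldest = ts;
                victim = candidate;       // oldest entry seen so far in the sample
            }
        }
        return victim;
    }
}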
There are probably other techniques and, of course, pseudo LRU policies (like CLOCK) that offer better concurrency at lower hit rates.

When are page frame specific cache management policies useful?

I'm reading the O'Reilly Linux Kernel book, and one of the things pointed out in the chapter on paging is that the Pentium cache lets the operating system associate a different cache management policy with each page frame. So I get that there could be scenarios where a program has very little spatial/temporal locality and memory accesses are random/infrequent enough that the probability of cache hits falls below some threshold.
I was wondering whether this mechanism is actually used in practice today, or whether it is more of a feature that was necessary back when caches were fairly small and not as efficient as they are now. I could see it being useful for an embedded system with little system-call overhead; are there other applications I am missing?
Having multiple cache management policies is widely used, whether by assigning whole regions using MTRRs (fixed/dynamic, as explained in Intel's PRM), MMIO regions, or through special instructions (e.g. streaming loads/stores, non-temporal prefetches, etc.). The use cases also vary a lot: you may be trying to map an external I/O device into virtual memory (and don't want CPU caching to impact its coherence), you may want to define a write-through region for better integrity management of some database, or you may just want plain write-back to maximize the cache hierarchy's capacity and replacement efficiency (which means performance).
These usages often overlap (especially when multiple applications are running), so the flexibility is very much needed, as you said - you don't want data with little to no spatial/temporal locality to thrash out other lines you use all the time.
By the way, caches are never going to be big enough in the foreseeable future (with any known technology), since increasing them requires locating them further away from the core and paying in latency. So cache management is still, and is going to be for a long while, one of the most important things for performance-critical systems and applications.

How to avoid bottleneck performance?

A distributed system is described as scalable if it remains effective when there is a significant increase in the number of resources and the number of users. However, these systems sometimes face performance bottlenecks. How can these be avoided?
The question is pretty broad, and depends entirely on what the system is doing.
Here are some things I've seen in systems to reduce bottlenecks.
Use caches to reduce network and disk bottlenecks. But remember that knowing when to evict from a cache can be a hard problem in some situations.
Use message queues to decouple components in the system. This way you can add more hardware to specific parts of the system that need it.
Delay computation when possible (often by using message queues). This takes the heat off the system during high-processing times.
Of course, design the system for parallel processing wherever possible. One host doing processing is NOT scalable. Note: most relational databases fall into the one-host bucket, which is why NoSQL has suddenly become popular, though it is not always appropriate.
Use eventual consistency if possible. Strong consistency is much harder to scale.
Some are proponents of CQRS and DDD. Though I have never seen or designed a "CQRS system" nor a "DDD system," those ideas have definitely affected the way I design systems.
There is a lot of overlap in the points above; some of the techniques may rely on some of the others.
But experience (your own and others') eventually teaches you about scalable systems. I keep up to date by reading about designs from Google, Amazon, Twitter, Facebook, and the like. Another good starting point is the High Scalability blog.
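To make the message-queue and delayed-computation points concrete, here is a toy Java sketch (the in-memory queue stands in for a real broker such as RabbitMQ or Kafka, and the sizes are arbitrary): producers enqueue work and return immediately, while a separate worker pool drains the queue at its own pace and could be moved to hardware that scales independently.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class DeferredWork {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(10_000);
        ExecutorService workers = Executors.newFixedThreadPool(4);

        // Worker side: each worker blocks until work is available, then runs it.
        for (int i = 0; i < 4; i++) {
            workers.submit(() -> {
                try {
                    while (true) {
                        queue.take().run();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();   // stop when the pool shuts down
                }
            });
        }

        // Producer side: enqueue the expensive task and move on instead of doing it inline.
        queue.offer(() -> System.out.println("expensive work done later"));

        // For this demo only: let the workers run briefly, then shut the pool down.
        Thread.sleep(1_000);
        workers.shutdownNow();
    }
}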
Just to build on a point discussed in the above post, I would like to add that what you need for your distributed system is a distributed cache, so that when you intend to scale your application the distributed cache acts as an "elastic" data fabric, meaning that you can increase the storage capacity of the cache without compromising performance while at the same time getting a reliable platform that is accessible to multiple applications.
One such distributed caching solution is NCache. Do take a look!

What to avoid for performance reasons in multithreaded code?

I'm currently reviewing/refactoring a multithreaded application which is supposed to be multithreaded in order to be able to use all the available cores and theoretically deliver better / superior performance (superior is the commercial term for better :P).
What are the things I should be aware when programming multithreaded applications?
I mean things that will greatly impact performance, maybe even to the point where you don't gain anything with multithreading at all but lose a lot by design complexity. What are the big red flags for multithreading applications?
Should I start questioning the locks and looking to a lock-free strategy or are there other points more important that should light a warning light?
Edit: The kind of answers I'd like are similar to the answer by Janusz: I want red flags to look for in the code. I know the application doesn't perform as well as it should, and I need to know where to start looking, what should worry me, and where I should put my efforts. I know it's a fairly general question, but I can't post the entire program, and if I could pick out one section of code I wouldn't need to ask in the first place.
I'm using Delphi 7, although the application will be ported/remade in .NET (C#) next year, so I'd rather hear comments that apply as general practice or, if they must be specific, apply to either of those languages.
One thing to definitely avoid is lots of write access to the same cache lines from threads.
For example: if you use a counter variable to count the number of items processed by all threads, this will really hurt performance, because the cache line has to be synchronized between CPUs every time any of them writes to the variable.
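The question is about Delphi/.NET, but the effect is the same on any platform; here is a small Java illustration of the remedy: instead of every thread hammering a single counter (one hot cache line bouncing between cores), stripe the count so writers rarely touch the same line, which is exactly what LongAdder does.
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

class CounterContention {
    static final AtomicLong shared = new AtomicLong();   // one hot cache line, contended by all threads
    static final LongAdder striped = new LongAdder();    // striped cells, summed lazily on read

    static void onItemProcessed() {
        // shared.incrementAndGet();   // contended: every increment invalidates the line on other cores
        striped.increment();           // mostly uncontended; call striped.sum() when the total is needed
    }
}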
One thing that decreases performance is having two threads doing heavy hard-drive access. The drive would jump between providing data for one thread and the other, and both threads would end up waiting for the disk all the time.
Something to keep in mind when locking: lock for as short a time as possible. For example, instead of this:
lock(syncObject)
{
    bool value = askSomeSharedResourceForSomeValue();
    if (value)
        DoSomethingIfTrue();
    else
        DoSomethingIfFalse();
}
Do this (if possible):
bool value = false;
lock(syncObject)
{
    value = askSomeSharedResourceForSomeValue();
}
if (value)
    DoSomethingIfTrue();
else
    DoSomethingIfFalse();
Of course, this example only works if DoSomethingIfTrue() and DoSomethingIfFalse() don't require synchronization, but it illustrates this point: locking for as short a time as possible, while maybe not always improving your performance, will improve the safety of your code in that it reduces surface area for synchronization problems.
And in certain cases, it will improve performance. Staying locked for long lengths of time means that other threads waiting for access to some resource are going to be waiting longer.
More threads than there are cores typically means that the program is not performing optimally.
So a program which spawns loads of threads is usually not designed in the best fashion. A good example of this practice is the classic socket example where every incoming connection gets its own thread to handle the connection. It is a very unscalable way to do things. The more threads there are, the more time the OS has to spend context switching between them.
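As a sketch of the alternative (in Java, for illustration only): hand incoming work to a pool sized close to the number of cores instead of spawning a thread per connection, so the OS is not drowned in context switches.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class BoundedWorkers {
    private static final int CORES = Runtime.getRuntime().availableProcessors();
    private static final ExecutorService pool = Executors.newFixedThreadPool(CORES);

    static void onConnectionAccepted(Runnable handler) {
        pool.submit(handler);   // queued if all workers are busy, rather than creating a new thread
    }
}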
You should first be familiar with Amdahl's law.
If you are using Java, I recommend the book Java Concurrency in Practice; however, most of its help is specific to the Java language (Java 5 or later).
In general, reducing the amount of shared memory increases the amount of parallelism possible, and for performance that should be a major consideration.
Threading with GUIs is another thing to be aware of, but it looks like it is not relevant for this particular problem.
What kills performance is when two or more threads share the same resources. This could be an object that both use, a file that both use, a network connection that both use, or a processor that both use. You cannot always avoid these dependencies on shared resources, but if possible, try to avoid sharing them.
Run-time profilers may not work well with a multi-threaded application. Still, anything that makes a single-threaded application slow will also make a multi-threaded application slow. It may be an idea to run your application as a single-threaded application, and use a profiler, to find out where its performance hotspots (bottlenecks) are.
When it's running as a multi-threaded application, you can use the system's performance-monitoring tool to see whether locks are a problem. Assuming that your threads lock instead of busy-waiting, seeing 100% CPU across several threads is a sign that locking isn't a problem. Conversely, something that looks like 50% total CPU utilization on a dual-processor machine is a sign that only one thread is running, so maybe your locking is preventing more than one thread from running concurrently (when counting the number of CPUs in your machine, beware of multi-core and hyperthreading).
Locks aren't only in your code but also in the APIs you use: e.g. the heap manager (whenever you allocate and delete memory), maybe in your logger implementation, maybe in some of the O/S APIs, etc.
Should I start questioning the locks and looking to a lock-free strategy
I always question the locks, but have never used a lock-free strategy; instead my ambition is to use locks where necessary, so that it's always threadsafe but will never deadlock, and to ensure that locks are acquired for a tiny amount of time (e.g. for no more than the amount of time it takes to push or pop a pointer on a thread-safe queue), so that the maximum amount of time that a thread may be blocked is insignificant compared to the time it spends doing useful work.
You don't mention the language you're using, so I'll make a general statement on locking. Locking is fairly expensive, especially the naive locking that is native to many languages. In many cases you are reading a shared variable (as opposed to writing), and reading is thread-safe as long as it does not take place simultaneously with a write. However, you still have to lock it. The most naive form of this locking treats the read and the write as the same type of operation, restricting access to the shared variable from other reads as well as from writes. A reader/writer lock can dramatically improve performance: one writer, unlimited readers. On an app I've worked on, I saw a 35% performance improvement after switching to this construct. If you are working in .NET, the correct lock is the ReaderWriterLockSlim.
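The same reader/writer-lock idea sketched in Java (ReentrantReadWriteLock is the rough analogue of .NET's ReaderWriterLockSlim): many readers may hold the read lock simultaneously, while a writer takes the exclusive write lock only for the brief moment it mutates the shared value.
import java.util.concurrent.locks.ReentrantReadWriteLock;

class SharedValue {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    int read() {
        rw.readLock().lock();            // shared: other readers are not blocked
        try { return value; } finally { rw.readLock().unlock(); }
    }

    void write(int v) {
        rw.writeLock().lock();           // exclusive: blocks readers and other writers briefly
        try { value = v; } finally { rw.writeLock().unlock(); }
    }
}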
I recommend looking into running multiple processes rather than multiple threads within the same process, if it is a server application.
The benefit of dividing the work between several processes on one machine is that it is easy to increase the number of servers when more performance is needed than a single server can deliver.
You also reduce the risks involved with complex multithreaded applications where deadlocks, bottlenecks etc reduce the total performance.
There are commercial frameworks that simplify server software development when it comes to load balancing and distributed queue processing, but developing your own load-sharing infrastructure is not that complicated compared with what you will encounter in a multi-threaded application in general.
I'm using Delphi 7
You might be using COM objects, then, explicitly or implicitly; if you are, COM objects have their own complications and restrictions on threading: Processes, Threads, and Apartments.
You should first get a tool to monitor threads, specific to your language, framework and IDE. Your own logger might do fine too (resume time, sleep time + duration). From there you can check for badly performing threads that don't execute much or wait too long for something to happen; you might want to make the event they are waiting for occur as early as possible.
As you want to use both cores you should check the usage of the cores with a tool that can graph the processor usage on both cores for your application only, or just make sure your computer is as idle as possible.
Besides that, you should profile your application just to make sure that the things performed within the threads are efficient, but watch out for premature optimization. There is no sense in optimizing your multiprocessing if the threads themselves perform badly.
Looking for a lock-free strategy can help a lot, but it is not always possible to get your application to perform in a lock-free way.
Threads don't always equal performance.
Things are a lot better in certain operating systems as opposed to others, but if you can have something sleep or relinquish its time until it's signaled...or not start a new process for virtually everything, you're saving yourself from bogging the application down in context switching.

Caching Schemes for Managed Languages

This is mostly geared toward desktop application developers. How do I design a caching block which plays nicely with the GC? How do I tell the GC that I have just done a cache sweep and it is time to do a GC? How do I get an accurate measure of when it is time to do a cache sweep?
Are there any prebuilt caching schemes which I could borrow some ideas from?
While I obviously cannot speak to the specifics of your application, in most instances you should not tie your caching implementation to some perceived expectation for how the GC will work. As Stu mentions, calling GC.Collect() will force a collection (with overloads for a specific generation) but more often than not doing so will result in worse performance than just letting the GC manage itself.
If you do find (after doing some real performance testing) that you need to interact with the GC make sure you take into account the different types of GC's that the framework currently has (see here for more information).
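One way to follow that advice, sketched here in Java since the idea applies to any managed language: bound the cache by size and recency yourself and let the collector do its normal job, rather than trying to coordinate sweeps with the GC. LinkedHashMap's access order plus removeEldestEntry gives a tiny LRU cache; production code would normally reach for a library cache instead.
import java.util.LinkedHashMap;
import java.util.Map;

// Not thread-safe: wrap with Collections.synchronizedMap or use a concurrent library cache
// if multiple threads will touch it.
class BoundedLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    BoundedLruCache(int maxEntries) {
        super(16, 0.75f, true);          // accessOrder = true -> iteration order becomes LRU order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;      // evict the least-recently-used entry when over capacity
    }
}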
All you'll ever need to know (and then some):
http://msdn.microsoft.com/en-us/library/ee817645.aspx
Oh, and GC.Collect() forces a collect.
