EHCache has too many misses on the heap tier

I am using a standalone Ehcache 3.10 with a heap tier and a disk tier.
The heap tier is configured to hold 100 entries, with no expiration.
In practice, the algorithm inserts only 30 entries into the cache, but it performs many "updates" and "reads" of these 30 entries.
Before I insert a new entry into the cache, I check whether the entry already exists.
Therefore, when I look at the Ehcache statistics, I expect to see only 30 misses on the heap tier and 30 misses on the disk tier.
Instead I get 9806 misses on the heap tier and 9806 hits on the disk tier, meaning that 9806 times Ehcache did not find the entry on the heap but did find it on the disk.
These numbers make no sense to me: the heap tier can hold 100 entries and only 30 are ever used, so why are there so many misses?
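For reference, the access pattern looks roughly like this (a simplified sketch; loadARFile and updateARFile are placeholders for my actual code):
Cache<String, ARFileImpl> cache =
    cacheManager.getCache(ARFILE_CACHE_NAME, String.class, ARFileImpl.class);

// Check whether the entry already exists before inserting it
ARFileImpl file = cache.get(key);   // read
if (file == null) {
    file = loadARFile(key);         // only ~30 distinct keys ever reach this branch
    cache.put(key, file);           // insert (30 in total)
} else {
    updateARFile(file);             // "update" of an existing entry
    cache.put(key, file);           // write the updated value back
}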
Here is my configuration:
statisticsService = new DefaultStatisticsService();

// create temp dir
Path cacheDirectory = getCacheDirectory();

ResourcePoolsBuilder resourcePoolsBuilder =
    ResourcePoolsBuilder.newResourcePoolsBuilder()
        .heap(100, EntryUnit.ENTRIES)
        .disk(Long.MAX_VALUE, MemoryUnit.B, true);

CacheConfigurationBuilder cacheConfigurationBuilder =
    CacheConfigurationBuilder.newCacheConfigurationBuilder(
            String.class,      // The cache key
            ARFileImpl.class,  // The cache value
            resourcePoolsBuilder)
        .withExpiry(ExpiryPolicyBuilder.noExpiration()) // No expiration
        .withResilienceStrategy(new ThrowingResilienceStrategy<>())
        .withSizeOfMaxObjectGraph(100000);

// Create the cache manager
cacheManager =
    CacheManagerBuilder.newCacheManagerBuilder()
        // Set the persistent directory
        .with(CacheManagerBuilder.persistence(cacheDirectory.toFile()))
        .withCache(ARFILE_CACHE_NAME, cacheConfigurationBuilder)
        .using(statisticsService)
        .build(true);
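The statistics are read through the StatisticsService, roughly like this (a sketch; the tier names "OnHeap" and "Disk" and the getters come from org.ehcache.core.statistics and may differ slightly by version):
CacheStatistics cacheStats = statisticsService.getCacheStatistics(ARFILE_CACHE_NAME);

// Cache-level counters
long cacheHits = cacheStats.getCacheHits();
long cacheMisses = cacheStats.getCacheMisses();

// Per-tier counters
Map<String, TierStatistics> tiers = cacheStats.getTierStatistics();
TierStatistics heap = tiers.get("OnHeap");
TierStatistics disk = tiers.get("Disk");
System.out.println("Heap hits/misses: " + heap.getHits() + "/" + heap.getMisses());
System.out.println("Disk hits/misses: " + disk.getHits() + "/" + disk.getMisses());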
Here is the result of the statistics:
Cache stats:
CacheExpirations: 0
CacheEvictions: 0
CacheGets: 6669684
CacheHits: 6669684
CacheMisses: 0
CacheHitPercentage: 100.0
CacheMissPercentage: 0.0
CachePuts: 10525
CacheRemovals: 0
Heap stats:
AllocatedByteSize: -1
Mappings: 30
Evictions: 0
Expirations: 0
Hits: 6659878
Misses: 9806
OccupiedByteSize: -1
Puts: 0
Removals: 0
Disk stats:
AllocatedByteSize: 22429696
Mappings: 30
Evictions: 0
Expirations: 0
Hits: 9806
Misses: 0
OccupiedByteSize: 9961952
Puts: 10525
Removals: 0
The reason for asking is that these redundant disk reads cause noticeable performance degradation.

Related

Elasticsearch Slow as size parameter increases (not dependent on data size)

I have the following peculiar case for a simple GET search request with "query": {"match_all": {}}, "sort": ["_doc"] and:
size = 0:
_source = ["X"]: 0.03 sec and 262 bytes
_source = ["X", "Y", "Z"]: 0.02 sec and 262 bytes
size = 10:
_source = ["X"]: 0.90 sec and 2944 bytes
_source = ["X", "Y", "Z"]: 0.88 sec and 949 Kb
size = 20:
_source = ["X"]: 1.20 sec and 5613 bytes
_source = ["X", "Y", "Z"]: 1.14 sec and 1006 Kb
size = 60:
_source = ["X"]: 2.30 sec and 9693 bytes
_source = ["X", "Y", "Z"]: 2.08 sec and 1120 Kb
The fields "X", "Y", "Z" are of type "text" and hold either years, either one / two words.
Seems to me that the increase in data size is not the problem of the slow query, but rather only the increase in size parameter.
For 1120Kb of data, it shouldn't take 2.5 seconds to retrieve the data - way too much.
I have an architecture of one cluster with one node and 5 indices. Each index has 1 shard allocated except for an index which has 2 shards.
RAM capacity is of 58GB and the heap size is 8GB.
The Elasticsearch version is 7.6
I already have:
request cache set to True
cache size equal to 20% of heap
preload of ["nvd", "dvd", "dvm", "doc"]
no explicit refresh
refresh_time of 2s on the index on which the queries are done
2 shards on the index on which the queries are done
swap disabled
GET _cat/indices shows the heap is used 63% and the RAM is used 37% (it should be the other way around)
GET index/_settings shows the fetch phase is 76 times bigger than the query phase! (~2.3s vs 0.03s)
Where could the problem be?
Thanks in advance.

elasticsearch bulkload performance issue

We want to increase the speed of bulk-load.
We currently use Java to bulk-load documents into Elasticsearch. We plan to import 10M documents, each almost 8 MB in size. At the moment we can only import 400K documents per day, i.e. about 5 documents per second.
Our ES infrastructure is 3 master nodes with 4 GB ES_JAVA_OPTS (heap size), 2 data nodes, and 2 client nodes with 2 GB of memory. Whenever we try to increase the bulk-load speed, we run into heap-size issues. We set up the ES cluster on Kubernetes.
The disk I/O is below.
dd if=/dev/zero of=/data/tmp/test1.img bs=1G count=10 oflag=dsync
10737418240 bytes (11 GB) copied, 50.7528 s, 212 MB/s
dd if=/dev/zero of=/data/tmp/test2.img bs=512 count=100000 oflag=dsync
51200000 bytes (51 MB) copied, 336.107 s, 152 kB/s
Any advice for improvement?
for (int x = 0; x < 200000; x++) {
    BulkRequest bulkRequest = new BulkRequest();
    for (int k = 0; k < 50; k++) {                // 50 documents per bulk request
        Order order = generateOrder();
        IndexRequest indexRequest = new IndexRequest("orderpot", "orderpot");
        Object esDataMap = objectToMap(order);
        String source = JSONObject.valueToString(esDataMap);
        indexRequest.source(source, XContentType.JSON);
        bulkRequest.add(indexRequest);
    }
    rhlclient.bulk(bulkRequest, RequestOptions.DEFAULT);
}
This is where we run over the heap size.
It seems you need more memory for the data nodes. 10M documents of 8 MB each will cost a lot of memory. You can reduce the memory of the master nodes and add it to the data nodes, since master nodes need less memory than data nodes. If no more nodes are available, you can combine the client nodes with the data nodes; more data nodes will share the pressure.
Some other advice:
1. Disable refresh by setting index.refresh_interval to -1 and set index.number_of_replicas to 0 while indexing (see the sketch after this list).
2. Set an explicit mapping for your index instead of relying on the default mapping. For example, some fields can be integer (no need to use long), some fields can be text because keyword will never be used, and some fields will only ever be used as text.
See also the official tune-for-indexing-speed guide: https://www.elastic.co/guide/en/elasticsearch/reference/master/tune-for-indexing-speed.html
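A sketch of applying point 1 with the high-level REST client used in the question (the index name "orderpot" is taken from the question; remember to restore the settings after the load):
// Turn off refresh and replicas before the bulk load
UpdateSettingsRequest disable = new UpdateSettingsRequest("orderpot");
disable.settings(Settings.builder()
        .put("index.refresh_interval", "-1")
        .put("index.number_of_replicas", 0));
rhlclient.indices().putSettings(disable, RequestOptions.DEFAULT);

// ... run the bulk-load loop ...

// Restore sensible values when indexing is done
UpdateSettingsRequest restore = new UpdateSettingsRequest("orderpot");
restore.settings(Settings.builder()
        .put("index.refresh_interval", "1s")
        .put("index.number_of_replicas", 1));
rhlclient.indices().putSettings(restore, RequestOptions.DEFAULT);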

Ehcache miss counts

These are the statistics for my Ehcache.
I have it configured to use only memory (no persistence, no overflow to disk).
cacheHits = 50
onDiskHits = 0
offHeapHits = 0
inMemoryHits = 50
misses = 1194
onDiskMisses = 0
offHeapMisses = 0
inMemoryMisses = 1138
size = 69
averageGetTime = 0.061597
evictionCount = 0
As you can see, misses is higher than onDiskMisses + offHeapMisses + inMemoryMisses. I do have statistics strategy set to best effort:
cache.setStatisticsAccuracy(Statistics.STATISTICS_ACCURACY_BEST_EFFORT)
But the hits do add up, and the difference in misses is rather large. Is there a reason why the misses do not add up correctly?
This question is similar to Ehcache misses count and hitrate statistics, but there the answer attributes the difference to multiple tiers. There is only one tier here.
I'm almost certain that you're seeing this because inMemoryMisses does not include misses due to expired elements. On a get, if the value is stored but expired, you will not see an inMemoryMiss recorded, but you will see a cache miss.
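A minimal sketch of that situation with the classic net.sf.ehcache API (names and the 1-second TTL are illustrative): a stored-but-expired element produces a cache miss without a matching inMemoryMiss.
// Cache of up to 100 heap entries with a 1-second time-to-live
CacheConfiguration config = new CacheConfiguration("example", 100).timeToLiveSeconds(1);
Cache cache = new Cache(config);
CacheManager.create().addCache(cache);

cache.put(new Element("k", "v"));
Thread.sleep(1500);                 // let the element expire

Element e = cache.get("k");         // returns null: the element is still stored but expired,
                                    // so this is counted as a cache miss, not as an inMemoryMiss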

Cassandra read latency high even with row caching, why?

I am testing cassandra performance with a simple model.
CREATE TABLE "NoCache" (
key ascii,
column1 ascii,
value ascii,
PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE AND
bloom_filter_fp_chance=0.010000 AND
caching='ALL' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'SnappyCompressor'};
I am fetching 100 columns of a row key using pycassa's get/xget functions, but I am getting a read latency of about 15 ms on the server.
columns = COL_FAM.get(row_key, column_count=100)
nodetool cfstats
Column Family: NoCache
SSTable count: 1
Space used (live): 103756053
Space used (total): 103756053
Number of Keys (estimate): 128
Memtable Columns Count: 0
Memtable Data Size: 0
Memtable Switch Count: 0
Read Count: 20
Read Latency: 15.717 ms.
Write Count: 0
Write Latency: NaN ms.
Pending Tasks: 0
Bloom Filter False Positives: 0
Bloom Filter False Ratio: 0.00000
Bloom Filter Space Used: 976
Compacted row minimum size: 4769
Compacted row maximum size: 557074610
Compacted row mean size: 87979499
Latency of this magnitude is astonishing, given that nodetool info shows the reads hitting directly in the row cache.
Row Cache : size 4834713 (bytes), capacity 67108864 (bytes), 35 hits, 38 requests, 1.000 recent hit rate, 0 save period in seconds
Can anyone tell me why is cassandra taking so much time while reading from row cache?
Enable tracing and see what it's doing. http://www.datastax.com/dev/blog/tracing-in-cassandra-1-2

Caching not Working in Cassandra

I don't seem to have any caching enabled when checking in OpsCenter or cfstats. I'm running Cassandra 1.1.7 with Solandra on Debian. I have set the required global options in cassandra.yaml:
key_cache_size_in_mb: 800
key_cache_save_period: 14400
row_cache_size_in_mb: 800
row_cache_save_period: 15400
row_cache_provider: SerializingCacheProvider
Column Families were created as follows:
create column family example
with column_type = 'Standard'
and comparator = 'BytesType'
and default_validation_class = 'BytesType'
and key_validation_class = 'BytesType'
and read_repair_chance = 1.0
and dclocal_read_repair_chance = 0.0
and gc_grace = 864000
and min_compaction_threshold = 4
and max_compaction_threshold = 32
and replicate_on_write = true
and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
and caching = 'ALL';
OpsCenter shows no data available on the caching graphs, and cfstats doesn't show any cache-related fields:
Column Family: charsets
SSTable count: 1
Space used (live): 5558
Space used (total): 5558
Number of Keys (estimate): 128
Memtable Columns Count: 0
Memtable Data Size: 0
Memtable Switch Count: 0
Read Count: 61381
Read Latency: 0.123 ms.
Write Count: 0
Write Latency: NaN ms.
Pending Tasks: 0
Bloom Filter False Positives: 0
Bloom Filter False Ratio: 0.00000
Bloom Filter Space Used: 16
Compacted row minimum size: 1917
Compacted row maximum size: 2299
Compacted row mean size: 2299
Any help or suggestions are appreciated.
Sam
The caching stats have been moved from cfstats to info in Cassandra 1.1. If you run nodetool info you should see something like:
Key Cache : size 5552 (bytes), capacity 838860800 (bytes), 38 hits, 47 requests, 0.809 recent hit rate, 14400 save period in seconds
Row Cache : size 0 (bytes), capacity 838860800 (bytes), 0 hits, 0 requests, NaN recent hit rate, 15400 save period in seconds
This is because the caches are now global rather than per-CF. It seems that OpsCenter needs updating for this change; maybe there is a later version available that will work.