Java Caching System (JCS) - Create regions programmatically

We are planning to use a caching mechanism in our application, and chose Java Caching System (JCS) after comparing it against several other caching solutions. Everything is fine when I use external configuration (cache.ccf) to define cache regions and their properties (like max life, idle time, etc.).
But the requirement has changed: we now need dynamic cache regions, i.e. regions and their properties must be defined at run time. I have not been able to find much documentation or sample code for this.
I managed to create cache regions at run time, using the method below:
ICompositeCacheAttributes cattr = ..
IElementAttributes attr = new ElementAttributes();
attr.setIsEternal(false);
attr.setMaxLifeSeconds(maxLife);
defineRegion(name, cattr, attr);
But the problem is that the IElementAttributes are never applied to the cache. I dug into the JCS source and found that the attr argument is simply ignored by defineRegion; it is an unused parameter, which is a bit strange.
After some more searching, I found the following way to set the default attributes manually, but that still did not work:
IElementAttributes attr = new ElementAttributes();
attr.setIsEternal(false);
attr.setMaxLifeSeconds(maxLife);
jcs.setDefaultElementAttributes(attr);
All I want is to set maxLifeSeconds for created regions.

I found a way around my problem: the attributes must be set when the data is put into the cache. Here is the implementation, for anyone interested:
JCS jcs = JCS.getInstance("REGION");
IElementAttributes attr = new ElementAttributes();
attr.setIsEternal(false);
attr.setMaxLifeSeconds(maxLife);
jcs.put("Key", data, attr);

Sample code for configuring regions programmatically via Properties:
Properties props = new Properties();
//
// Default cache configs
//
props.put("jcs.default", "");
props.put("jcs.default.cacheattributes","org.apache.jcs.engine.CompositeCacheAttributes");
props.put("jcs.default.cacheattributes.MaxObjects","1000");
props.put("jcs.default.cacheattributes.MemoryCacheName", "org.apache.jcs.engine.memory.lru.LRUMemoryCache");
props.put("jcs.default.cacheattributes.UseMemoryShrinker", "true");
props.put("jcs.default.cacheattributes.MaxMemoryIdleTimeSeconds", "3600");
props.put("jcs.default.cacheattributes.ShrinkerIntervalSeconds", "60");
props.put("jcs.default.cacheattributes.MaxSpoolPerRun", "500");
//
// Region cache
//
props.put("jcs.region.myregionCache", "");
props.put("jcs.region.myregionCache.cacheattributes", "org.apache.jcs.engine.CompositeCacheAttributes");
props.put("jcs.region.myregionCache.cacheattributes.MaxObjects", "1000");
props.put("jcs.region.myregionCache.cacheattributes.MemoryCacheName", "org.apache.jcs.engine.memory.lru.LRUMemoryCache");
props.put("jcs.region.myregionCache.cacheattributes.UseMemoryShrinker", "true");
props.put("jcs.region.myregionCache.cacheattributes.MaxMemoryIdleTimeSeconds", "3600");
props.put("jcs.region.myregionCache.cacheattributes.ShrinkerIntervalSeconds", "60");
props.put("jcs.region.myregionCache.cacheattributes.MaxSpoolPerRun", "500");
props.put("jcs.region.myregionCache.cacheattributes.DiskUsagePatternName", "UPDATE");
props.put("jcs.region.myregionCache.elementattributes", "org.apache.jcs.engine.ElementAttributes");
props.put("jcs.region.myregionCache.elementattributes.IsEternal", "false");
...
// Configure
CompositeCacheManager ccm = CompositeCacheManager.getUnconfiguredInstance();
ccm.configure(props);
// Access region
CompositeCache myregionCache = ccm.getCache("myregionCache");
...

Related

How to write a simple business logic in spring web flux mono

The logic is simple: I want to fetch a Locality as a Mono from the AddressRepository; if the locality exists it will be reused, otherwise a new record will be added to the locality table.
As I am very new to Spring reactive programming, I am a little confused about the implementation. I have written the following code segments.
Address address = new Address();
// If a locality with this name exists -> reuse it, else create a new one
this.addressRepository.findByName(addressCommand.getLocality())
    .convert().with(toMono())
    .subscribe(item -> {
        if (item.getLocalityName().equalsIgnoreCase(addressCommand.getLocality())) {
            address.setLocality(item);
        } else {
            address.setLocality(Locality.builder()
                .localityName(addressCommand.getLocality()).build());
        }
    });
In the same method I have another pipeline that sets the address built above and returns the user id to the client.
Mono<User> createdUser = this.userRepository.adminCreate(
        User.builder()
            .address(address)
            .administrativeArea(administrativeArea)
            .build())
    .convert().with(toMono());
return createdUser.map(u -> u.getCustomerId().toString());
If I run this in debug mode with breakpoints, I can see that the user is created first and the address is only set some time later, so the locality data is always null. Can I do this within the same pipeline? Any suggestions?
Thanks,
Dasun.
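One way to fix the ordering problem is to compose the lookup into the pipeline instead of subscribing to it on the side (in Reactor terms, chaining with map/flatMap on the Mono rather than calling subscribe). Here is that composition sketched with the stdlib CompletableFuture; Locality, Address, and the repository methods are hypothetical stand-ins for illustration, not the asker's real types.

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    record Locality(String name) {}
    record Address(Locality locality) {}

    // Stand-in repository: looks up a locality by name, possibly absent.
    static CompletableFuture<Optional<Locality>> findByName(String name) {
        return CompletableFuture.completedFuture(
                "Colombo".equals(name) ? Optional.of(new Locality("Colombo")) : Optional.empty());
    }

    // Stand-in repository: persists the user and returns its id.
    static CompletableFuture<String> createUser(Address address) {
        return CompletableFuture.completedFuture("user-1:" + address.locality().name());
    }

    // The lookup is composed into the pipeline, so the user is only created
    // after the locality decision has been made.
    static CompletableFuture<String> createUserWithLocality(String localityName) {
        return findByName(localityName)
                .thenApply(found -> new Address(found.orElseGet(() -> new Locality(localityName))))
                .thenCompose(ComposeDemo::createUser);
    }

    public static void main(String[] args) {
        System.out.println(createUserWithLocality("Colombo").join()); // user-1:Colombo
        System.out.println(createUserWithLocality("Kandy").join());   // user-1:Kandy
    }
}
```

In Reactor the same shape would be findByName(...).map(...).flatMap(userRepository::adminCreate), keeping everything in one pipeline with nothing happening in a side subscription.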

How can I enable automatic slicing on Elasticsearch operations like UpdateByQuery or Reindex using the Nest client?

I'm using the Nest client to programmatically execute requests against an Elasticsearch index. I need to use the UpdateByQuery API to update existing data in my index. To improve performance on large data sets, the recommended approach is to use slicing. In my case I'd like to use the automatic slicing feature documented here.
I've tested this out in the Kibana dev console and it works beautifully. I'm struggling with how to set this property in code through the Nest client interface. Here's a code snippet:
var request = new Nest.UpdateByQueryRequest(indexModel.Name);
request.Conflicts = Elasticsearch.Net.Conflicts.Proceed;
request.Query = filterQuery;
// TODO Need to set slices to auto but the current client doesn't allow it and the server
// rejects a value of 0
request.Slices = 0;
var elasticResult = await _elasticClient.UpdateByQueryAsync(request, cancellationToken);
The comments on that property indicate that it can be set to "auto", but it expects a long so that's not possible.
// Summary:
// The number of slices this task should be divided into. Defaults to 1, meaning
// the task isn't sliced into subtasks. Can be set to `auto`.
public long? Slices { get; set; }
Setting to 0 just throws an error on the server. Has anyone else tried doing this? Is there some other way to configure this behavior? Other APIs seem to have the same problem, like ReindexOnServerAsync.
This was a bug in the spec, and an unfortunate consequence of generating this part of the client from the spec.
The spec has been fixed and the change will be reflected in a future version of the client. For now, it can be set as follows:
var request = new Nest.UpdateByQueryRequest(indexModel.Name);
request.Conflicts = Elasticsearch.Net.Conflicts.Proceed;
request.Query = filterQuery;
((IRequest)request).RequestParameters.SetQueryString("slices", "auto");
var elasticResult = await _elasticClient.UpdateByQueryAsync(request, cancellationToken);

How to set JsonSerializer default settings globally

With System.Text.Json.JsonSerializer, I have to pass the options every time I serialize or deserialize, or set attributes on every property of the object, because there is no way to set/change the default settings. At least I am not able to find one.
JsonSerializer.Deserialize<TypeListDTO>(
    "{\"listNo\":33}",
    new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase });
Is there such a way available? If not, is there a workaround available?
EDIT: I am using .NET Core 3 with endpoint routing, but could very well not be using it at all.
Try it with AddJsonOptions(Action&lt;JsonOptions&gt;) in Startup.ConfigureServices. In .NET Core 3 this configures System.Text.Json, and the settings live on options.JsonSerializerOptions (note that System.Text.Json has no equivalent of Newtonsoft's TypeNameHandling):
services.AddMvc()
    .AddJsonOptions(options =>
    {
        options.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase;
        options.JsonSerializerOptions.WriteIndented = true;
    });

When an object was added to Caffeine cache

I'm using Spring Boot caching in a project, with Caffeine as the cache provider. I've added some default Caffeine configuration to the project, and I can get the most recent object from the cache using the following code:
private final CaffeineCache caffeineCache = (CaffeineCache) caffeineCacheManager.getCache("myCacheName");
Cache<Object, Object> cache = this.caffeineCache.getNativeCache();
cache.policy().eviction().get().hottest(1);
I don't actually want the object itself, but I'd want to know when it was added to the cache. Is there a way to find out when this object was added to the cache?
Thanks to Ben Maine's comment, I found a solution to my problem: I calculated the time the entry was cached from its age under the expireAfterWrite policy.
CacheResponse cacheResponse = new CacheResponse();
Cache<Object, Object> cache = this.caffeineCache.getNativeCache();
Iterator<Object> hottest = cache.policy().eviction().get().hottest(1).keySet().iterator();
if (hottest.hasNext()) {
    // Age in milliseconds since the object was written to the cache
    OptionalLong age = cache.policy().expireAfterWrite().get()
            .ageOf(hottest.next(), TimeUnit.MILLISECONDS);
    LOGGER.info("Calculating last UTC cached date");
    // Write time = now - age; set it on the cache response
    cacheResponse.setTime(Instant.now().minusMillis(age.getAsLong())
            .atZone(ZoneOffset.UTC).toLocalDateTime());
} else {
    LOGGER.info("Cache appears to be empty.");
}
cacheResponse.setNumCachedItems(cache.estimatedSize());
return cacheResponse;
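The calculation above simply subtracts the reported age from the current instant. A minimal stdlib sketch of that arithmetic, with a fixed Clock standing in for Instant.now() so the result is deterministic (the helper name is mine, not a Caffeine API):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class CacheAgeDemo {
    // Recover the UTC write time of an entry from the age reported by
    // Policy.FixedExpiration.ageOf(key, TimeUnit.MILLISECONDS).
    static LocalDateTime writeTimeUtc(long ageMillis, Clock clock) {
        return Instant.now(clock).minusMillis(ageMillis)
                .atZone(ZoneOffset.UTC).toLocalDateTime();
    }

    public static void main(String[] args) {
        // Fix "now" at 00:00:10 so the example is reproducible.
        Clock fixed = Clock.fixed(Instant.parse("2024-01-01T00:00:10Z"), ZoneOffset.UTC);
        // An entry whose age is 10 s was therefore written at 00:00:00.
        System.out.println(writeTimeUtc(10_000, fixed)); // 2024-01-01T00:00
    }
}
```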

Setting Time To Live (TTL) from Java - sample requested

EDIT:
This is basically what I want to do, only in Java
Using Elasticsearch, we add documents to an index by passing IndexRequest items to a BulkRequestBuilder.
I would like the documents to be dropped from the index after some time has passed (time to live/TTL).
This can be done either by setting a default for the index, or on a per-document basis. Either approach is fine by me.
The code below is an attempt to do it per document. It does not work; I think that is because TTL is not enabled for the index. Either show me what Java code I need to add to enable TTL so the code below works, or show me different code that enables TTL and sets a default TTL value for the index from Java. I know how to do it from the REST API, but I need to do it from Java code, if at all possible.
logger.debug("Indexing record ({}): {}", id, map);
final IndexRequest indexRequest = new IndexRequest(_indexName, _documentType, id);
final long debug = indexRequest.ttl();
if (_ttl > 0) {
    indexRequest.ttl(_ttl);
    System.out.println("Setting TTL to " + _ttl);
    System.out.println("IndexRequest now has ttl of " + indexRequest.ttl());
}
indexRequest.source(map);
indexRequest.operationThreaded(false);
bulkRequestBuilder.add(indexRequest);
}
// execute and block until done.
BulkResponse response;
try {
response = bulkRequestBuilder.execute().actionGet();
Later I check in my unit test by polling this method, but the document count never goes down.
public long getDocumentCount() throws Exception {
    Client client = getClient();
    try {
        client.admin().indices().refresh(new RefreshRequest(INDEX_NAME)).actionGet();
        ActionFuture<CountResponse> response = client.count(new CountRequest(INDEX_NAME).types(DOCUMENT_TYPE));
        CountResponse countResponse = response.get();
        return countResponse.getCount();
    } finally {
        client.close();
    }
}
After a LONG day of googling and writing test programs, I came up with a working example of how to use TTL and basic index/object creation from the Java API. Frankly, most of the examples in the docs are trivial; some JavaDoc and end-to-end examples would go a LONG way toward helping those of us who use the non-REST interfaces.
Ah well.
Code here: Adding mapping to a type from Java - how do I do it?
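The _ttl mechanism this question relies on has since been removed from Elasticsearch, but the semantics being asked for are easy to state in plain Java: each entry carries its own expiry deadline, and expired entries read as absent. The sketch below illustrates those TTL semantics only; it is not Elasticsearch code, and the injectable Supplier&lt;Instant&gt; clock is my addition so the behavior is deterministic.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class TtlMap<K, V> {
    private record Entry<V>(V value, Instant deadline) {}
    private final Map<K, Entry<V>> store = new HashMap<>();
    private final Supplier<Instant> now; // injectable clock, for testing

    public TtlMap(Supplier<Instant> now) { this.now = now; }

    // Each entry carries its own expiry deadline (per-document TTL).
    public void put(K key, V value, Duration ttl) {
        store.put(key, new Entry<>(value, now.get().plus(ttl)));
    }

    // Expired entries read as absent and are lazily evicted on access.
    public V get(K key) {
        Entry<V> e = store.get(key);
        if (e == null || !now.get().isBefore(e.deadline)) {
            store.remove(key);
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) {
        Instant[] t = { Instant.parse("2024-01-01T00:00:00Z") };
        TtlMap<String, String> map = new TtlMap<>(() -> t[0]);
        map.put("doc", "payload", Duration.ofSeconds(30));
        System.out.println(map.get("doc")); // payload
        t[0] = t[0].plusSeconds(60);        // advance past the deadline
        System.out.println(map.get("doc")); // null
    }
}
```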