Can you iterate over caches managed by `org.ehcache.CacheManager`? - ehcache

Is it possible to iterate over caches managed by org.ehcache.CacheManager?

Yes, but the typed API means you have to go through the configuration information to get the right key and value types for a given cache alias:
Configuration configuration = cacheManager.getRuntimeConfiguration();
for (Map.Entry<String, CacheConfiguration<?, ?>> entry : configuration.getCacheConfigurations().entrySet()) {
    CacheConfiguration<?, ?> cacheConfig = entry.getValue();
    // The alias plus the configured key/value types retrieve the typed cache
    Cache<?, ?> cache = cacheManager.getCache(entry.getKey(), cacheConfig.getKeyType(), cacheConfig.getValueType());
}
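Building on that, each cache obtained this way can then be walked entry by entry, since an Ehcache 3 Cache is Iterable over its entries (a minimal sketch; the println is just illustrative):
// Inside the loop above: iterate the entries of each typed cache
for (Cache.Entry<?, ?> cacheEntry : cache) {
    System.out.println(entry.getKey() + ": " + cacheEntry.getKey() + " = " + cacheEntry.getValue());
}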

Related

Dynamically Adding Entity Sets to ODataConventionModelBuilder or ODataModelBuilder

Is there a way to dynamically add EntitySets to an ODataConventionModelBuilder?
I'm working on an OData service in .NET. Some of the entities we'll be returning come from an external assembly. I read the assembly just fine and get the relevant types, but since those types are variables I'm not sure how to define them as entity sets.
Example:
public static void Register(HttpConfiguration config)
{
    //some config housekeeping here
    config.MapODataServiceRoute("odata", null, GetEdmModel(), new DefaultODataBatchHandler(GlobalConfiguration.DefaultServer));
    //more config housekeeping
}
private static IEdmModel GetEdmModel()
{
    ODataConventionModelBuilder builder = new ODataConventionModelBuilder();
    builder.Namespace = "SomeService";
    builder.ContainerName = "DefaultContainer";
    //These are the easy, available, in-house types
    builder.EntitySet<Dog>("Dogs");
    builder.EntitySet<Cat>("Cats");
    builder.EntitySet<Horse>("Horses");
    //Schema manager gets the rest of the relevant types by reading an assembly.
    //I have them; now I just need to create entity sets for them.
    foreach (Type t in SchemaManager.GetEntityTypes)
    {
        builder.AddEntityType(t); //Great! But what if I want an EntitySET?
        builder.Function(t.Name).Returns<IQueryable>(); //See if you can put the correct IQueryable<Type> here.
        //OR
        builder.EntitySet<t>(t.Name); //compile error: a variable cannot be used as a type argument, even though the variable IS a type
    }
    return builder.GetEdmModel();
}
Figured it out. Just add this line inside the loop:
builder.AddEntitySet(t.Name, builder.AddEntityType(t));
AddEntityType(t) registers the runtime type and returns its EntityTypeConfiguration, which AddEntitySet then uses to create the entity set under the given name.

Elasticsearch and Spark: Updating existing entities

What is the correct way, when using Elasticsearch with Spark, to update existing entities?
I wanted to do something like the following:
Get existing data as a map.
Create a new map, and populate it with the updated fields.
Persist the new map.
However, there are several issues:
The list of returned fields cannot contain the _id, as it is not part of the source.
If, for testing, I hardcode an existing _id in the map of new values, the following exception is thrown:
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest
How should the _id be retrieved, and how should it be passed back to Spark?
I include the following code below to better illustrate what I was trying to do:
JavaRDD<Map<String, Object>> esRDD = JavaEsSpark.esRDD(jsc, INDEX_NAME + "/" + TYPE_NAME,
        "?source=field1,field2").values();
Iterator<Map<String, Object>> iter = esRDD.toLocalIterator();
List<Map<String, Object>> listToPersist = new ArrayList<Map<String, Object>>();
while (iter.hasNext()) {
    Map<String, Object> map = iter.next();
    // Get existing values, and do transformation logic
    Map<String, Object> newMap = new HashMap<String, Object>();
    newMap.put("_id", ??????);
    newMap.put("field1", new_value);
    listToPersist.add(newMap);
}
JavaRDD<Map<String, Object>> javaRDD = jsc.parallelize(ImmutableList.copyOf(listToPersist));
JavaEsSpark.saveToEs(javaRDD, INDEX_NAME + "/" + TYPE_NAME);
Ideally, I would want to update the existing map in place, rather than create a new one.
Does anyone have any example code to show, when using Spark, the correct way to update existing entities in elasticsearch?
Thanks
This is how I've done it (Scala/Spark 2.3/Elastic-Hadoop v6.5).
To read (id or other metadata):
spark
  .read
  .format("org.elasticsearch.spark.sql")
  .option("es.read.metadata", true) // allows reading metadata
  .load("yourindex/yourtype")
  .select(col("_metadata._id").as("myId"), ...)
To update particular columns in ES (the implicit saveToEs on a DataFrame comes from import org.elasticsearch.spark.sql._):
myDataFrame
  .select("myId", "columnToUpdate")
  .saveToEs(
    "yourindex/yourtype",
    Map(
      "es.mapping.id" -> "myId",        // which column carries the document id
      "es.write.operation" -> "update", // important: change the operation to partial update
      "es.mapping.exclude" -> "myId"    // keep the id column out of _source
    )
  )
Try adding an upsert operation to your Spark config:
.config("es.write.operation", "upsert")
That will let you add new fields to existing documents.
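For context, in a Java Spark application that setting is typically supplied when building the session (a minimal sketch; the app name and the es.mapping.id value are illustrative, not from the question):
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
        .appName("es-upsert-example")
        .config("es.write.operation", "upsert") // update if the doc exists, insert otherwise
        .config("es.mapping.id", "idfield")     // upsert needs to know which field carries the doc id
        .getOrCreate();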
According to the Elasticsearch Configuration documentation, you can get document metadata like _id by setting the read metadata option to true:
.config("es.read.metadata", "true")
And I think you cannot use '_id' as a field name.
But you can create a new field with a different name, like:
newMap.put("idfield", yourId);
Then set the name of that new field as the value of the mapping id option, to inform Elasticsearch that this field holds the document id:
.config("es.mapping.id", "idfield")
BTW, don't forget to set the write operation to update:
.config("es.write.operation", "update")

Store EhCache Object as value not reference?

I'm using EhCache all over the application and I have stumbled upon a problem.
I need to cache "raw" data (a tree of maps and some lists). The cached value, after being retrieved from the cache, is meant to be processed further (some elements filtered out, reordered, etc.).
My problem is that I want to keep the original cached value intact, as it is meant to be used for some "post-processing". So ultimately I want to store the object's "value"/deep clone, not its reference.
An example code:
//create a List of Maps
List<Map<String, String>> list = new ArrayList<Map<String, String>>();
Map<String, String> map = new HashMap<String, String>();
map.put("key1", "v1");
map.put("key2", "v2");
list.add(map);
//add to cache
cache.put("cacheRegion", "list", list);
//now add a new element to the list (a 2nd map)
list.add(new TreeMap<String, String>());
//now remove 1 entry from the 1st map
map = list.get(0);
map.remove("key1");
list = (List<Map<String, String>>) cache.get("cacheRegion", "list");
assertEquals("list should still have 1 element, despite adding a new map after the cache put", 1, list.size());
//check map
map = list.get(0);
assertEquals("map should still contain 2 entries, as when it was added to the cache", 2, map.size());
Does EhCache support that?
There is an attribute in EhCache (copyOnRead) which can be set to true for this. The cache configuration will look something like:
<cache name="copyCache"
       maxElementsInMemory="10"
       eternal="false"
       timeToIdleSeconds="5"
       timeToLiveSeconds="10"
       overflowToDisk="false"
       copyOnRead="true"
       copyOnWrite="true">
</cache>
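For comparison, the same copy semantics can be set up programmatically with the Ehcache 2.x API (a minimal sketch; the cache name mirrors the XML above):
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;

CacheConfiguration config = new CacheConfiguration("copyCache", 10)
        .eternal(false)
        .timeToIdleSeconds(5)
        .timeToLiveSeconds(10)
        .copyOnRead(true)   // readers get a copy, not the cached reference
        .copyOnWrite(true); // the cache stores a copy of what was put
CacheManager cacheManager = CacheManager.getInstance();
cacheManager.addCache(new Cache(config));
Note that the default copy strategy copies by serialization, so the cached values must be Serializable.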

How to get all Keys from Redis using redis template

I have been stuck on this problem for quite some time. I want to get keys from Redis using RedisTemplate.
I tried redisTemplate.keys("*");
but this doesn't fetch anything. Even with a pattern it doesn't work.
Can you please advise on the best solution to this?
I have consolidated the answers we have seen here.
Here are two ways of getting keys from Redis when using RedisTemplate.
1. Directly from RedisTemplate
Set<String> redisKeys = template.keys("samplekey*");
// Store the keys in a List
List<String> keysList = new ArrayList<>();
Iterator<String> it = redisKeys.iterator();
while (it.hasNext()) {
    String data = it.next();
    keysList.add(data);
}
Note: you should have configured redisTemplate with a StringRedisSerializer in your bean.
If you use Java-based bean configuration:
redisTemplate.setDefaultSerializer(new StringRedisSerializer());
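For reference, a complete bean definition with that serializer might look like this (a sketch; the class, bean, and parameter names are illustrative):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        // String keys ensure keys("pattern") matches what is actually stored
        template.setKeySerializer(new StringRedisSerializer());
        return template;
    }
}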
If you use Spring XML-based bean configuration:
<bean id="stringRedisSerializer" class="org.springframework.data.redis.serializer.StringRedisSerializer"/>
<!-- redis template definition -->
<bean
    id="redisTemplate"
    class="org.springframework.data.redis.core.RedisTemplate"
    p:connection-factory-ref="jedisConnectionFactory"
    p:keySerializer-ref="stringRedisSerializer"
/>
2. From JedisConnectionFactory
RedisConnection redisConnection = template.getConnectionFactory().getConnection();
Set<byte[]> redisKeys = redisConnection.keys("samplekey*".getBytes());
List<String> keysList = new ArrayList<>();
Iterator<byte[]> it = redisKeys.iterator();
while (it.hasNext()) {
    byte[] data = it.next();
    keysList.add(new String(data, 0, data.length));
}
redisConnection.close();
If you don't close this connection explicitly, you will run into exhaustion of the underlying Jedis connection pool, as explained in https://stackoverflow.com/a/36641934/3884173.
Try:
Set<byte[]> keys = redisTemplate.getConnectionFactory().getConnection().keys("*".getBytes());
Iterator<byte[]> it = keys.iterator();
while (it.hasNext()) {
    byte[] data = it.next();
    System.out.println(new String(data, 0, data.length));
}
Try redisTemplate.setKeySerializer(new StringRedisSerializer());
Avoid using the KEYS command. It can ruin performance when executed against large databases. You should use the SCAN command instead. Here is how you can do it:
RedisConnection redisConnection = null;
try {
    redisConnection = redisTemplate.getConnectionFactory().getConnection();
    ScanOptions options = ScanOptions.scanOptions().match("myKey*").count(100).build();
    Cursor<byte[]> c = redisConnection.scan(options);
    while (c.hasNext()) {
        logger.info(new String(c.next()));
    }
} finally {
    if (redisConnection != null) {
        redisConnection.close(); // Ensure the connection is closed.
    }
}
Or do it much more simply with the Redisson Redis Java client:
Iterable<String> keysIterator = redisson.getKeys().getKeysByPattern("test*", 100);
for (String key : keysIterator) {
    logger.info(key);
}
Try:
import org.springframework.data.redis.core.RedisTemplate;
import org.apache.commons.collections.CollectionUtils;

String key = "example*";
Set<String> keys = redisTemplate.keys(key);
if (CollectionUtils.isEmpty(keys)) return null;
List<Object> list = redisTemplate.opsForValue().multiGet(keys);
It did work, but it seems not recommended, because we can't use the KEYS command in production. I assume redisTemplate.getConnectionFactory().getConnection().keys() calls the Redis KEYS command. What are the alternatives?
I was using redisTemplate.keys(), but it was not working. So I used Jedis instead, and it worked. The following is the code that I used:
Jedis jedis = new Jedis("localhost", 6379);
Set<String> keys = jedis.keys("*"); // the String overload returns Set<String>
for (String key : keys) {
    // do something
}
A solution can look like this:
String pattern = "abc" + "*";
Set<String> keys = jedis.keys(pattern);
for (String key : keys) {
    // process each matching key
}
Or you can use jedis.scan() with ScanParams instead.
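A minimal sketch of that approach, assuming a Jedis 3.x client (the pattern and count are illustrative):
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

Jedis jedis = new Jedis("localhost", 6379);
String cursor = ScanParams.SCAN_POINTER_START; // "0"
ScanParams params = new ScanParams().match("abc*").count(100);
do {
    ScanResult<String> result = jedis.scan(cursor, params);
    for (String key : result.getResult()) {
        System.out.println(key);
    }
    cursor = result.getCursor(); // continue until the cursor wraps back to "0"
} while (!"0".equals(cursor));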

Clear Sitecore cache for an item from cache programmatically

I want to clear the Sitecore cache for an item programmatically. I ran the code below; after that I tried to do a web.GetItem on the deleted id, and I still get null. Any suggestions?
Database db = new Database("web");
if (ID.IsID(id))
{
    ID itemID = new ID(id);
    //clear data cache
    db.Caches.DataCache.RemoveItemInformation(itemID);
    //clear item cache
    db.Caches.ItemCache.RemoveItem(itemID);
    //clear standard values cache
    db.Caches.StandardValuesCache.RemoveKeysContaining(itemID.ToString());
    //clear path cache
    db.Caches.PathCache.RemoveKeysContaining(itemID.ToString());
}
Looks like you have missed the prefetch cache; here is how to get it:
private Cache GetPrefetchCache(Database database)
{
    foreach (var cache in global::Sitecore.Caching.CacheManager.GetAllCaches())
    {
        if (cache.Name.Contains(string.Format("Prefetch data({0})", database.Name)))
        {
            return cache;
        }
    }
    return null;
}
And the HTML cache as well:
private void ClearAllHtmlCaches()
{
    foreach (var info in Factory.GetSiteInfoList())
    {
        info.HtmlCache.Clear();
    }
}
