Using Time To Live (TTL) for a Spring Data Cassandra repository

Not able to use TTL with a Spring Data CassandraRepository-based implementation.
Spring Data Cassandra version: latest
I am trying to use the TTL property of Cassandra for the save operation with a Spring Data repository-based implementation. However, looking at the reference documentation (https://docs.spring.io/spring-data/cassandra/docs/current/reference/html/), I don't see any straightforward way of using it.
The docs mention that it can be used, but no example is provided for a repository-based implementation. Do note that I see some examples using CqlTemplate and CassandraOperations, but none for repositories.
No code written yet, as I am trying to figure out how to use it.
My expectation would be some kind of @TTL(value in seconds) annotation on the repository save/update method for an easier implementation.

Refer to A Sarkar's answer from this post: TTL support in spring boot application using spring-data-cassandra
Please see my sample code here: https://github.com/nontster/spring-data-cassandra-demo
I borrowed the sample code from this tutorial: https://www.baeldung.com/spring-data-cassandra-tutorial
You need to create the demo keyspace before you can run this code:
CREATE KEYSPACE demo WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 1};
Run the saveBookTest() method in BookRepositoryIntegrationTest.java and you can see the TTL counting down on the column (I set the TTL to 600 seconds):
cqlsh:demo> SELECT title,TTL(year) FROM Book WHERE title='Head First Java' AND publisher='O''Reilly Media';

 title           | ttl(year)
-----------------+-----------
 Head First Java |       597

(1 rows)
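For reference, here is a minimal sketch of the repository-side approach, assuming Spring Data Cassandra 2.x. BookRepositoryCustom and saveWithTtl are only illustrative names (not necessarily what the linked sample uses), and Book is the entity from the Baeldung tutorial: a custom repository fragment delegates to CassandraOperations with InsertOptions that carry the TTL.

import org.springframework.data.cassandra.core.CassandraOperations;
import org.springframework.data.cassandra.core.InsertOptions;

// Hypothetical custom fragment; Book is the entity from the linked sample/tutorial
interface BookRepositoryCustom {
    Book saveWithTtl(Book book, int ttlSeconds);
}

class BookRepositoryCustomImpl implements BookRepositoryCustom {

    private final CassandraOperations cassandraOperations;

    BookRepositoryCustomImpl(CassandraOperations cassandraOperations) {
        this.cassandraOperations = cassandraOperations;
    }

    @Override
    public Book saveWithTtl(Book book, int ttlSeconds) {
        // Attach the TTL (in seconds) to this insert only
        InsertOptions options = InsertOptions.builder()
                .ttl(ttlSeconds)
                .build();
        cassandraOperations.insert(book, options);
        return book;
    }
}

The main repository interface then extends both CassandraRepository and BookRepositoryCustom, so a call like bookRepository.saveWithTtl(book, 600) is picked up as a custom fragment alongside the generated save methods.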

Related

Hazelcast cache statistics, can't get number of puts of a given cache name

I'm using Hazelcast as the Hibernate second-level cache in a Spring Boot application and I'm facing a problem getting some cache statistics. Calling
this.hazelcastInstance.getMap(cacheName).getLocalMapStats()
I'm getting values that don't seem correct. For example, for a given cacheName
this.hazelcastInstance.getMap(cacheName).getLocalMapStats().getPutOperationCount() // = 0
and the same cacheName has a map of 14 elements when looking at
hazelcastInstance.getDistributedObjects()
Do you have any idea why the LocalMapStats object doesn't seem to have the right values?

ElasticSearch 6 High Level Rest Client preparePutMapping

I am trying to transition from ElasticSearch 2 to either 5 or 6. I think I want to jump straight to 6.1.1 and use the RestHighLevelClient, since it is closer to the existing transport client I am using than the low-level REST client is.
However, I am running across a problem. As part of my integration tests, I create an index and insert particular data so I know that my queries are correct. I can't seem to do that with the high-level client. In particular, I want to be able to call:
RestHighLevelClient client = new RestHighLevelClient(RestClient.builder(new HttpHost(host, port)))
client.indices()
.preparePutMapping(databaseName)
.setType(tableName).etc....
However, client.indices() returns an org.elasticsearch.client.IndicesClient (from org.elasticsearch.client:elasticsearch-rest-high-level-client:6.1.1) which does not have a preparePutMapping(). I need a org.elasticsearch.client.IndicesAdminClient (from org.elasticsearch:elasticsearch:6.1.1). I can't figure out how to get that, from either RestHighLevelClient or RestClient.
Am I out of luck? Are these just transition pains, meaning it simply has not been implemented yet? Or is it something more permanent?
Looking at the documentation:
client.admin().indices()
.preparePutMapping(databaseName)
.setType(tableName)
...
Looks like all you need is an additional .admin().
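If the IndicesClient in the version you are on does not expose a put-mapping call at all, one alternative sketch (not from the answer above, and only an assumption on my part) is to go through the low-level RestClient that the high-level client wraps. The index name, type name, and mapping JSON below are placeholders:

import java.util.Collections;

import org.apache.http.HttpHost;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class PutMappingViaLowLevelClient {

    public static void main(String[] args) throws Exception {
        RestHighLevelClient client =
                new RestHighLevelClient(RestClient.builder(new HttpHost("localhost", 9200)));

        // Placeholder mapping for a single text field
        String mappingJson = "{\"properties\":{\"title\":{\"type\":\"text\"}}}";

        // PUT /<index>/_mapping/<type> through the low-level client wrapped by the high-level one
        Response response = client.getLowLevelClient().performRequest(
                "PUT",
                "/databaseName/_mapping/tableName",
                Collections.<String, String>emptyMap(),
                new NStringEntity(mappingJson, ContentType.APPLICATION_JSON));

        System.out.println(response.getStatusLine());
        client.close();
    }
}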

Using Redis in OpenStack Keystone, some rubbish in Redis

Recently, I have been using Redis to cache tokens for OpenStack Keystone. The functionality is fine, but some expired cache data is still in Redis.
My Keystone config:
[cache]
enabled=true
backend=dogpile.cache.redis
backend_argument=url:redis://127.0.0.1:6379
[token]
provider = uuid
caching=true
cache_time= 3600
driver = kvs
expiration = 3600
but some expired data remains in Redis:
The data is past its expiration time but is still there, because its TTL is -1.
My questions:
How can I change the settings to stop this rubbish data from being created?
Is there a graceful way to clean it up?
I tried the command 'keystone-manage token_flush', but after reading the code I realized it only cleans up the expired tokens in MySQL.
I hope this question is still relevant.
I'm trying to do the same thing as you, and for now the only option I found that works is the redis_expiration_time argument on dogpile.cache.redis.
Check out the dogpile.redis backend API or source code:
http://dogpilecache.readthedocs.io/en/latest/api.html#dogpile.cache.backends.redis.RedisBackend.params.redis_expiration_time
The only problem with this argument is that it does not let you choose a different TTL for different categories, for example tokens for 10 minutes and the catalog for 24 hours. The other parameters in keystone.conf just don't work, in my experience (expiration_time and cache_time on each category)... Anyway, this problem isn't relevant if you are using Redis to store only Keystone tokens.
[cache]
enabled=true
backend=dogpile.cache.redis
backend_argument=url:redis://127.0.0.1:6379
# Add this line
backend_argument=redis_expiration_time:[TTL]
Just replace [TTL] with your desired TTL and you'll start noticing keys with a TTL in Redis; after a while you will see that they are gone.
About the second question:
This is maybe not the best answer you'll see, but you can use the OBJECT IDLETIME [key] command in redis-cli to see how long a specific key has gone unused (even a GET resets the idle time). You can delete the keys whose idle time is larger than your token revocation window using a simple script.
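If it helps, here is a hedged sketch of that kind of cleanup script in Java using the Jedis client (Jedis and the threshold below are my assumptions, not something the answer prescribes):

import redis.clients.jedis.Jedis;

public class IdleKeyCleaner {

    public static void main(String[] args) {
        long maxIdleSeconds = 3600L; // placeholder: match your token expiration/revocation window

        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // KEYS is acceptable for a one-off cleanup; prefer SCAN on large production datasets
            for (String key : jedis.keys("*")) {
                Long idleSeconds = jedis.objectIdletime(key); // same value as OBJECT IDLETIME in redis-cli
                if (idleSeconds != null && idleSeconds > maxIdleSeconds) {
                    jedis.del(key); // idle longer than the threshold, drop it
                }
            }
        }
    }
}

Pick the threshold to be at least your token expiration window, since anything idle longer than it gets deleted.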
Remember that the data in Redis isn't persistent, meaning you can always use FLUSHALL and OpenStack/Keystone will work as usual, but of course the first authentications will take longer.

Changing hazelcast configuration at runtime

Is it possible to change the Hazelcast configuration at runtime, and if so, which parameters are modifiable?
It seems to be possible using the Hazelcast Management Center, but I can't find any examples/references in the official docs/forums.
Might be a bit late to answer your question but better late than never :)
You can modify some of the map config properties after the map has been created using the MapService:
HazelcastInstance instance = Hazelcast.newHazelcastInstance();
// create map
IMap<String, Integer> myMap = instance.getMap("myMap");
// create a new map config
MapConfig newMapConfig = instance.getConfig().getMapConfig("myMap").setAsyncBackupCount(1);
// submit the new map config to the map service
MapService mapService = (MapService)(((AbstractDistributedObject)instance.getDistributedObject(MapService.SERVICE_NAME, "")).getService());
mapService.getMapServiceContext().getMapContainer("myMap").setMapConfig(newMapConfig);
Note that this API is not visible/documented so it might not work in future versions.
We are using this in our application when we need to insert several million entries into a distributed map at startup. Disabling the backup cut the insertion time by 30%. After the data is inserted, we re-enable the backup.
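For illustration, a hedged sketch of that bulk-load pattern (drop backups, load, restore), reusing the same undocumented MapService hook from the snippet above; the imports assume Hazelcast 3.x and the class/method names are only illustrative, so treat it as a sketch rather than a supported API:

import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.impl.MapService;
import com.hazelcast.spi.AbstractDistributedObject;

public class BulkLoadWithoutBackups {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> myMap = instance.getMap("myMap");

        // Drop backups before the bulk insert
        applyMapConfig(instance, "myMap",
                instance.getConfig().getMapConfig("myMap").setBackupCount(0));

        for (int i = 0; i < 1_000_000; i++) {
            myMap.set("key-" + i, i);
        }

        // Restore a single synchronous backup once the data is loaded
        applyMapConfig(instance, "myMap",
                instance.getConfig().getMapConfig("myMap").setBackupCount(1));
    }

    // Same undocumented MapService hook as in the answer above
    private static void applyMapConfig(HazelcastInstance instance, String mapName, MapConfig newMapConfig) {
        MapService mapService = (MapService) ((AbstractDistributedObject) instance
                .getDistributedObject(MapService.SERVICE_NAME, "")).getService();
        mapService.getMapServiceContext().getMapContainer(mapName).setMapConfig(newMapConfig);
    }
}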
The Hazelcast internals are not really designed to be modifiable. What do you want to modify?

How to implement "Distributed cache clearing" in Ofbiz?

We have multiple instances of OFBiz/Opentaps running, and all of them talk to the same database. Many tables are rarely updated, so they are cached, and each instance maintains its own copy of the cache, as per the standard OFBiz cache mechanism. But in the rare situations when we update some entity through one of the instances, all the other instances keep showing stale cache data. So it requires manual action to go and clear all the cache copies on the other instances as well.
I want this cache-clearing operation to happen automatically on all the instances. On the OFBiz confluence page here there is a very brief mention of "Distributed cache clearing". It seems to rely on JMS: whenever an instance's cache is cleared, it sends a notification to a JMS topic, and the other instances subscribed to that topic clear their own copies of the cache on receiving it. But I could not find any other reference or documentation on how to do that. Which files need to be updated to set it all up in OFBiz? An example page/link is what I'm looking for.
Alright, I believe I've figured it all out. I used ActiveMQ as my JMS broker, so here are the steps to make it work in OFBiz:
1. Copy activemq-all.jar to the framework/base/lib folder inside your OFBiz base directory.
2. Edit the file base/config/jndiservers.xml: add the following definition inside the <jndi-config> tag:
<jndi-server name="activemq"
context-provider-url="failover:(tcp://jms.host1:61616,tcp://jms.host2:61616)?jms.useAsyncSend=true&amp;timeout=5000"
initial-context-factory="org.apache.activemq.jndi.ActiveMQInitialContextFactory"
url-pkg-prefixes=""
security-principal=""
security-credentials=""/>
3. Edit the file base/config/jndi.properties: add this line at the end:
topic.ofbiz-cache=ofbiz-cache
4. Edit the file service/config/serviceengine.xml: add the following definition inside the <service-engine> tag:
<jms-service name="serviceMessenger" send-mode="all">
<server jndi-server-name="activemq"
jndi-name="ConnectionFactory"
topic-queue="ofbiz-cache"
type="topic"
listen="true"/>
</jms-service>
5. Edit the file entityengine.xml: change the default delegator to enable distributed caching:
<delegator name="default" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" distributed-cache-clear-enabled="true">
6. Edit the file framework/service/src/org/ofbiz/service/jms/AbstractJmsListener.java (this one is probably a bug in the OFBiz code).
Change the following line from:
this.dispatcher = GenericDispatcher.getLocalDispatcher("JMSDispatcher", null, null, this.getClass().getClassLoader(), serviceDispatcher);
To:
this.dispatcher = GenericDispatcher.getLocalDispatcher("entity-default", null, null, this.getClass().getClassLoader(), serviceDispatcher);
7. Finally, build the service engine code by issuing the following command:
ant -f framework/service/build.xml
With this, entity data changes made on one OFBiz instance are immediately propagated to all the other instances, each clearing the affected cache entries on its own, without any need for manual cache clearing.
Cheers.
I have added a page on this subject to the OFBiz wiki: https://cwiki.apache.org/OFBIZ/distributed-entity-cache-clear-mechanism.html. Though it's well explained here, the OFBiz wiki page adds other important information.
Note that the bug reported here has since been fixed, but another issue is currently pending; I should fix it soon: https://issues.apache.org/jira/browse/OFBIZ-4296
Jacques
Yes, I fixed this behaviour some time ago at http://svn.apache.org/viewvc?rev=1090961&view=rev. But it still needs another fix related to https://issues.apache.org/jira/browse/OFBIZ-4296.
The patch below fixes this issue locally, but it still creates 2 listeners on clusters; not sure why... Still investigating (not a priority)...
Index: framework/entity/src/org/ofbiz/entity/DelegatorFactory.java
===================================================================
--- framework/entity/src/org/ofbiz/entity/DelegatorFactory.java (revision 1879)
+++ framework/entity/src/org/ofbiz/entity/DelegatorFactory.java (revision 2615)
@@ -39,10 +39,10 @@
         if (delegator != null) {
+            // setup the distributed CacheClear
+            delegator.initDistributedCacheClear();
+
             // setup the Entity ECA Handler
             delegator.initEntityEcaHandler();
             //Debug.logInfo("got delegator(" + delegatorName + ") from cache", module);
-
-            // setup the distributed CacheClear
-            delegator.initDistributedCacheClear();
             return delegator;
Please notify me using @JacquesLeRoux in your post if you ever have something new to share.
