I have a Redis database in my Spring Boot application. I use Jedis as my Redis client for retrieval, update, and delete operations.
Now I want to add distributed locks to my service, so I thought about using Redisson distributed locks. The Redisson lock would be acquired and released around some logic that uses the Jedis client.
Example:
redissonLock.lock();
doSomeReadAndUpdateOperationsByJedis();
redissonLock.unlock();
Will using two Redis clients work here? And if not, what is the best way to use a distributed lock with Jedis?
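For context, this is roughly the full method I have in mind (the lock name, wait/lease times, and the Jedis logic are placeholders):

import java.util.concurrent.TimeUnit;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;

public class LockedUpdateSketch {
    private final RedissonClient redissonClient;

    public LockedUpdateSketch(RedissonClient redissonClient) {
        this.redissonClient = redissonClient;
    }

    public void updateWithLock() throws InterruptedException {
        RLock lock = redissonClient.getLock("inventory-lock"); // placeholder lock name
        // wait up to 5s for the lock, auto-release after 30s in case this node dies
        if (lock.tryLock(5, 30, TimeUnit.SECONDS)) {
            try {
                doSomeReadAndUpdateOperationsByJedis(); // existing Jedis-based logic
            } finally {
                lock.unlock();
            }
        }
    }

    private void doSomeReadAndUpdateOperationsByJedis() {
        // existing Jedis reads/updates go here
    }
}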
The usual org.springframework.data.redis.core.RedisTemplate has a multi() method, which allows starting a transaction, and exec() to commit it.
But org.springframework.data.redis.core.ReactiveRedisTemplate does not have those methods.
I searched a lot for a way to use transactions with spring-boot-starter-data-redis-reactive and found no solutions.
The only way I see right now is to manually create a Lettuce client bean and use it alongside the Spring implementation, but it is not convenient to have two separate Redis clients.
Does anyone know how to use Redis transactions with spring-boot-starter-data-redis-reactive? Could you please write a simple example?
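For reference, this is roughly what the manual Lettuce workaround I mentioned would look like, following Lettuce's reactive transaction support (the connection details and keys are made up):

import io.lettuce.core.RedisClient;
import io.lettuce.core.TransactionResult;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.reactive.RedisReactiveCommands;
import reactor.core.publisher.Mono;

public class ReactiveTxSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> commands = connection.reactive();

        Mono<TransactionResult> tx = commands.multi()
                .doOnSuccess(ok -> {
                    // commands issued between MULTI and EXEC are queued and
                    // only complete once exec() runs
                    commands.set("balance", "100").subscribe();
                    commands.incr("version").subscribe();
                })
                .flatMap(ok -> commands.exec());

        TransactionResult result = tx.block(); // block() only for this demo
        System.out.println("discarded: " + result.wasDiscarded());

        connection.close();
        client.shutdown();
    }
}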
I have a requirement to develop a new feature in an application that currently uses Jedis for Redis operations.
I need to use Redis locks extensively for the new feature, and Redisson supports them very well. Can I use the Redisson client with the same Redis cluster in my application, or will it cause an issue?
The new flow is entirely different, and there will be no intersection between operations through Redisson and Jedis.
Regards,
Ankit
You can use Redisson alongside Jedis with the same Redis cluster in the same application. This should not cause any problems.
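A minimal sketch of how the two clients can coexist, using a single Redis node for simplicity (for a cluster, use config.useClusterServers() instead); the host, port, and key names are illustrative:

import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class LockWithJedisSketch {
    public static void main(String[] args) {
        // Redisson client, used only for locking
        Config config = new Config();
        config.useSingleServer().setAddress("redis://localhost:6379");
        RedissonClient redisson = Redisson.create(config);

        // Jedis pool, used for the existing read/update/delete logic
        JedisPool jedisPool = new JedisPool("localhost", 6379);

        RLock lock = redisson.getLock("my-lock"); // illustrative lock name
        lock.lock();
        try (Jedis jedis = jedisPool.getResource()) {
            // read-modify-write guarded by the distributed lock
            String current = jedis.get("counter");
            long next = current == null ? 1 : Long.parseLong(current) + 1;
            jedis.set("counter", Long.toString(next));
        } finally {
            lock.unlock();
        }

        redisson.shutdown();
        jedisPool.close();
    }
}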
I have a Spring Boot application that uses MongoDB. My plan is to store data in a distributed caching system before it gets inserted into Mongo. If the database fails, the caching layer will queue the data and send it to the DB once it is back up. So the plan is to place the caching layer between the application and Mongo.
Can you suggest some ideas on how to implement this using Apache Ignite?
Take a look at the write-behind cache store mode. It retries writing to the underlying database if an insertion fails. Let me know how it works for you.
You can also implement a custom CacheStore for an Ignite cache and enable write-through for it. If the connection is lost, you can collect entries in a buffer while retrying to re-establish the connection.
See more: https://apacheignite.readme.io/docs/3rd-party-store
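As a rough sketch, write-behind and write-through are enabled on the CacheConfiguration, with a Mongo-backed store plugged in as the cache store factory. MongoCacheStore here is a hypothetical CacheStoreAdapter you would write yourself; the Mongo calls are left as comments:

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindSketch {

    // Hypothetical Mongo-backed store
    public static class MongoCacheStore extends CacheStoreAdapter<String, String> {
        @Override public String load(String key) {
            return null; // e.g. look the document up by _id
        }
        @Override public void write(Cache.Entry<? extends String, ? extends String> entry) {
            // e.g. upsert the document keyed by entry.getKey()
        }
        @Override public void delete(Object key) {
            // e.g. delete the document by _id
        }
    }

    public static void main(String[] args) {
        CacheConfiguration<String, String> cfg = new CacheConfiguration<>("documents");

        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MongoCacheStore.class));
        cfg.setWriteThrough(true);

        // buffer updates in Ignite and flush them to Mongo asynchronously;
        // failed flushes are retried, which covers the "DB is down" case
        cfg.setWriteBehindEnabled(true);
        cfg.setWriteBehindFlushFrequency(5_000); // flush at most every 5 seconds
        cfg.setWriteBehindFlushSize(1_024);      // or once 1024 entries accumulate

        Ignite ignite = Ignition.start();
        IgniteCache<String, String> cache = ignite.getOrCreateCache(cfg);
        cache.put("doc-1", "{\"name\":\"example\"}"); // reaches Mongo via the store
    }
}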
I am using spring-data-redis as the data access layer for Redis. For data distribution, I tried to use the sharding feature of Jedis, but it looks like spring-data-redis does not officially support sharding. Is there any workaround or third-party library that can support sharding with spring-data-redis?
Thanks,
Emre
I've used twemproxy successfully to shard data across several Redis nodes.
I used spring-data-redis as well as other (non-Java) clients to access it. Since twemproxy 'speaks' the Redis protocol, it is (almost) transparent to the clients.
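From the application's point of view, the only change is where the connection factory points. A sketch with Jedis and a recent Spring Data Redis, assuming twemproxy listens on port 22121 of some proxy host (both are illustrative):

import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;

public class TwemproxySketch {
    public static void main(String[] args) {
        // point Spring Data Redis at the twemproxy listener instead of a single Redis node
        JedisConnectionFactory factory = new JedisConnectionFactory(
                new RedisStandaloneConfiguration("twemproxy-host", 22121));
        factory.afterPropertiesSet();

        StringRedisTemplate template = new StringRedisTemplate(factory);
        template.opsForValue().set("user:42:name", "Emre"); // twemproxy routes the key to a shard
        System.out.println(template.opsForValue().get("user:42:name"));
    }
}

The "(almost)" matters because twemproxy does not forward every command (transactions and some multi-key operations are not supported), so keep your access patterns to simple per-key operations.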
Is it possible to use transactions when Neo4j is used as a standalone server? I am calling methods on my Spring Data repositories, and each of them is probably executed as a separate transaction, but I would like to merge them into one. Is it possible to do this?
SDN doesn't support remote transactions (which only work with the transactional endpoint and Cypher) yet.
So the option you have to speed up your operation is to move the processing of the SDN entities into the server and expose a domain-level REST API to your clients (either with Jersey or SD-REST).
see: http://inserpio.wordpress.com/2014/04/30/extending-the-neo4j-server-with-spring-data-neo4j/
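A rough outline of such a server-side extension with Jersey (an unmanaged extension for the Neo4j 2.x server; the resource path and domain logic are placeholders), so that several operations share one server-side transaction:

import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;

// deployed into the Neo4j server as an unmanaged extension,
// mounted via org.neo4j.server.thirdparty_jaxrs_classes
@Path("/orders")
public class OrderResource {

    private final GraphDatabaseService db;

    public OrderResource(@Context GraphDatabaseService db) {
        this.db = db;
    }

    @POST
    public Response createOrder() {
        // everything below runs in a single server-side transaction
        try (Transaction tx = db.beginTx()) {
            Node order = db.createNode();      // placeholder domain logic
            order.setProperty("status", "NEW");
            // ... call further repository-style code here ...
            tx.success();
        }
        return Response.ok().build();
    }
}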