Redis HashOperations delete all hash keys - Spring

I use Spring Redis HashOperations to manipulate data in Redis. It requires 3 parameters: key, hashkey and hashvalue.
Currently, I can only delete a single hashkey with HashOperations#delete(key, hashkey).
Is there any way to delete all the hashkeys of a key other than iterating over them?

I have the same problem. I used this:
hashOperations.entries(key).keySet().forEach(hashKey -> hashOperations.delete(key, hashKey));
Not proud of it, but it works.
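A simpler route, if the goal is just to drop every field at once: deleting the hash's top-level key removes all of its fields in one command. A minimal sketch, assuming a Spring Data Redis StringRedisTemplate is wired in (the class and method names here are illustrative, not from the question):

import org.springframework.data.redis.core.HashOperations;
import org.springframework.data.redis.core.StringRedisTemplate;

public class HashCleaner {

    private final StringRedisTemplate redisTemplate;

    public HashCleaner(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Deleting the top-level key removes the hash and every field in it.
    public void clearHash(String key) {
        redisTemplate.delete(key);
    }

    // Alternative: remove all fields in one round trip via the varargs overload.
    public void clearFields(String key) {
        HashOperations<String, String, String> ops = redisTemplate.opsForHash();
        ops.delete(key, ops.keys(key).toArray());
    }
}

Either way avoids the one-delete-per-field loop from the answer above.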

Related

Redis: Get all keys by providing one of the values in the values list

In Redis I'm planning to store the key as a unique string and the value as a list.
I have a use case where I need to do 2 things.
First, I need to get all the values associated with a key by providing the key as input.
Second, I want to get all the keys associated with a value by providing one of the values in the list.
The second part is where I need advice: how can we achieve this?
I cannot fetch all keys or key-value pairs and loop through them, because I will have millions of entries in Redis.
As mentioned in the comment above, retrieving all keys with their associated values will likely create a performance issue, since it means running through a very large number of entries. As also suggested in the official documentation on retrieving data from memory caches, you can try the following Redis commands to get the values and see if they solve your purpose:
GET
MGET
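For what it's worth, a small Spring Data Redis sketch of those two commands (class and method names are illustrative; this covers only the key-to-value direction, not the value-to-keys lookup the question also asks about):

import java.util.Arrays;
import java.util.List;
import org.springframework.data.redis.core.StringRedisTemplate;

public class ValueLookup {

    private final StringRedisTemplate redisTemplate;

    public ValueLookup(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // GET: fetch the value stored under a single key.
    public String getOne(String key) {
        return redisTemplate.opsForValue().get(key);
    }

    // MGET: fetch several keys in one round trip instead of looping client-side.
    public List<String> getMany(String... keys) {
        return redisTemplate.opsForValue().multiGet(Arrays.asList(keys));
    }
}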

Get latest key value pair from Redis in Java Spring

I have a Java Spring application that uses Redis for some caching. Is there a way to get the key or key-value pair that was added to Redis last?
I also store 3 different types of values (entities) in Redis. Is there a way to get the latest record of one exact type of value?
Is Redis even suitable for this kind of thing?
No, Redis doesn't have this built-in functionality; you need to do it manually.
Whenever you set a key, you also set that key's name under another key, such as latest:key:
set entity:1 value:1
set latest:key entity:1
get latest:key
You may also use a hash, with the latest key's name as the field and its value as the hash value:
hset latest:key entity:1 value:1
hgetall latest:key
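Since the application is Java Spring, the same pattern might look like the sketch below. The per-type pointer key latest:<type> is my own naming, added so each of the three entity types keeps its own "latest":

import org.springframework.data.redis.core.StringRedisTemplate;

public class LatestTracker {

    private final StringRedisTemplate redisTemplate;

    public LatestTracker(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Write the entity, then remember its key under a per-type pointer.
    public void save(String type, String entityKey, String value) {
        redisTemplate.opsForValue().set(entityKey, value);
        redisTemplate.opsForValue().set("latest:" + type, entityKey);
    }

    // Follow the pointer to the most recently written entity of that type.
    public String getLatest(String type) {
        String latestKey = redisTemplate.opsForValue().get("latest:" + type);
        return latestKey == null ? null : redisTemplate.opsForValue().get(latestKey);
    }
}

Note that the two writes in save are not atomic; if that matters, wrap them in MULTI/EXEC (a SessionCallback in Spring).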

postgresql custom primary key

I'm building a project using Hibernate with Postgres as the DB. The problem I have is that I need to store the primary key in a form like 22/2017 or 432/1990.
Let's say the first number is object_id and the second is year_added.
What I want to achieve is to make the first and second numbers together a primary key, so that 22/2017 is different from 22/2016.
The only idea I have is, when a user adds a new object, to take the current year, find the last id for that year, and increment it.
So the first object added next year should be: 1/2018.
So far, only object_id is stored as the primary key in my DB.
This solution seems to work fine:
PostgreSQL: Auto-increment based on multi-column unique constraint
Thanks for helping me anyway.
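On the Hibernate side, the two numbers can be mapped together as a composite primary key with @EmbeddedId. A sketch under assumed entity names (the per-year counter itself would still come from the PostgreSQL technique linked above):

import java.io.Serializable;
import java.util.Objects;
import javax.persistence.Embeddable;
import javax.persistence.EmbeddedId;
import javax.persistence.Entity;

// Hypothetical composite key: object_id and year_added together identify a row,
// so 22/2017 and 22/2016 are distinct keys.
@Embeddable
class RecordId implements Serializable {
    private int objectId;
    private int yearAdded;

    protected RecordId() { }  // no-arg constructor required by JPA

    public RecordId(int objectId, int yearAdded) {
        this.objectId = objectId;
        this.yearAdded = yearAdded;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof RecordId)) return false;
        RecordId other = (RecordId) o;
        return objectId == other.objectId && yearAdded == other.yearAdded;
    }

    @Override
    public int hashCode() {
        return Objects.hash(objectId, yearAdded);
    }
}

@Entity
public class Record {
    @EmbeddedId
    private RecordId id;  // maps the 22/2017 pair as the primary key
}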

Sub keys with expiration time in Redis

I'm no expert in Redis, so does anyone know how I can create a key that has subkeys, where each subkey has its own expire time?
Is this possible in Redis?
It would be something like this:
[:keyX]
|
V
[:keyZ][:value]
|
V
EXPIRE keyZ 100
PS: the app is in Ruby.
Thanks!
Redis does not have nested keys, although the Hash data type could work for you. Also, Redis expiry applies only to keys - Hash fields, List elements, and Sorted or regular Set members cannot be assigned an independent TTL.
Your question does not detail why you're looking to do this (i.e. store keys under a "root" key and have each key expire on its own). You can get the per-key expiration effect by using plain old regular keys, or use a Hash to aggregate all the fields under one common key - but not both at the same time.
That said, if you really need this sort of functionality, you can always try implementing it yourself - see here for a possible direction: Redis: To set timeout for a key value pair in Set
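A sketch of the flat-key workaround mentioned above, shown with Spring Data Redis to stay consistent with the other examples on this page (the question's app is Ruby, but the underlying SET ... EX and GET commands are the same from any client; names are illustrative):

import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class ExpiringSubkeys {

    private final StringRedisTemplate redisTemplate;

    public ExpiringSubkeys(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Emulate keyX -> keyZ as the flat key "keyX:keyZ", so each
    // pseudo-subkey is a real key and can carry its own TTL.
    public void setSubkey(String parent, String child, String value, long ttlSeconds) {
        redisTemplate.opsForValue().set(parent + ":" + child, value, ttlSeconds, TimeUnit.SECONDS);
    }

    public String getSubkey(String parent, String child) {
        return redisTemplate.opsForValue().get(parent + ":" + child);
    }
}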

HBase row key design for reads and updates

I'm trying to understand the best way to design the row key for my HBase table.
My use case:
Structure right now
PersonID | BatchDate | PersonJSON
When something about a person is modified, a new PersonJSON with a new BatchDate is inserted into HBase, updating the old record. Every 4 hours, a scan of all the people who were modified is pushed to Hadoop for further processing.
If my key is just PersonID, it is great for updating the data. But my read performance suffers, because I have to add a filter on the BatchDate column to scan all the rows greater than a batch date.
If my key is a composite like BatchDate|PersonID, I could use startrow and endrow on the row key and get all the rows that have been modified. But then I would have a lot of duplicates, since the key is not unique, and I could no longer update a person in place.
Is a bloom filter on row+col (PersonID+BatchDate) an option?
Any help is appreciated.
Thanks,
Abhishek
In addition to the table with PersonID as the rowkey, it sounds like you need a dual-write secondary index, with BatchDate as the rowkey.
Another option would be Apache Phoenix, which provides support for secondary indexes.
I usually do it in two steps:
Create table one, whose key is the combination BatchDate+PersonID; the value can be empty.
Create table two just as you have it now: the key is PersonID and the value is the whole data.
For a date-range query, query table one first to get the PersonIDs, then use the HBase batch Get API to fetch the data in one batch. This is very fast.
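A sketch of that two-table read path with the HBase 2.x Java client (the table names "person_index" and "person", the '|' separator, and the date format are assumptions for illustration):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ModifiedPeopleReader {

    // Returns the full rows for every person modified in [startDate, stopDate).
    public Result[] readModified(String startDate, String stopDate) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table index = conn.getTable(TableName.valueOf("person_index"));
             Table person = conn.getTable(TableName.valueOf("person"))) {

            // Step 1: range-scan the index table (rowkey = BatchDate|PersonID).
            Scan scan = new Scan()
                    .withStartRow(Bytes.toBytes(startDate))
                    .withStopRow(Bytes.toBytes(stopDate));
            List<Get> gets = new ArrayList<>();
            try (ResultScanner results = index.getScanner(scan)) {
                for (Result r : results) {
                    String rowkey = Bytes.toString(r.getRow());
                    String personId = rowkey.substring(rowkey.indexOf('|') + 1);
                    gets.add(new Get(Bytes.toBytes(personId)));
                }
            }

            // Step 2: batch-get the full records from the main table.
            return person.get(gets);
        }
    }
}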
