Resetting Redis sorted set - sorting

I'm using a Redis server to sort scores in an online game.
The gameplay is divided into play sequences of 2-3 minutes, and after each sequence the scores are displayed and the user gets his rank (ZADD, ZREVRANK).
All is OK with this, but how can I reset my sorted set after a play sequence? I found ZREM, but it looks like I have to specify each member. Is there a quick way to remove all values from the sorted set?

Couldn't you simply use
del setname
to delete the set?
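For example (setname here is just a placeholder for your leaderboard key):
127.0.0.1:6379> zadd setname 100 "player:1" 200 "player:2"
(integer) 2
127.0.0.1:6379> del setname
(integer) 1
127.0.0.1:6379> exists setname
(integer) 0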

Related

validate and processing data in Redis sorted set efficiently

We have a microservice (written in Go) whose primary purpose is to collect logs from multiple IoT devices, do some processing on them, and put the result into a PostgreSQL table. Each device has its own sorted set where its logs are saved, and for each log the score is a timestamp (of course I know a time series would be a better choice, but we currently want to work with sorted sets). These logs come in every second from each device.
I want to process the data inside these sets every 5 seconds, but for each set, the logs inside should pass some tests:
there should be more than one log inside the set
two logs can be removed from the set, if the time difference between timestamps is 1 second
When the logs are validated, they can be passed to other methods or functions for the rest of the processing. If the logs are invalid (there exists a log whose timestamp differs by more than 1 second from the others), they go back into the set and wait for the next iteration to be checked again.
Problem:
My problem is basically that I don't know how to get the data out of the set, validate it, and put it back again! To be more clear: for each set, all or none of the logs inside can be removed, and this happens while new data is constantly coming in. Since I can't validate the data with Redis itself, I don't know what to do. My current solution is as follows:
Every 5 seconds, all data from each set is removed from Redis and saved in some data structure inside the code (like a list). Then, after validating, the logs that are still not valid are put back into Redis. As you can see, this solution needs two database accesses from the code, and when the invalid logs are put back, they have to be re-sorted by Redis.
When there are many logs and many devices, I think this solution is not the best way to go. I'm not very experienced with Redis, so I would be thankful for your comments on the problem. Thanks.
Since you decided to use sorted sets, here are some things to know first:
"there should be more than one log inside the set". If there is no element in the sorted set, than the set/key doesn't exist. You can check if there is any log in the sorted set via two different commands; zcard and exists - both works in O(1).
The same log (really the same) can't be in the sorted set more than once. You need an identifier (such as a timestamp, UUID, hash, etc.) to separate individual logs from each other in a single sorted set. Adding the same member again only updates the score of the existing element, which may not be what you want:
127.0.0.1:6379> zadd mydevice 1234 "log-a"
(integer) 1
127.0.0.1:6379> zadd mydevice 12345 "log-a"
(integer) 0
127.0.0.1:6379> zrange mydevice 0 -1 withscores
1) "log-a"
2) "12345"
127.0.0.1:6379>
There is no single way to do this at the data layer with built-in commands. You will need an application layer with business logic to accomplish what you need.
My suggestion would be to keep each combination of IoT device + minute in a separate sorted set. So every minute each device gets a different key: you append the minute (e.g. 2020:06:06:21:21) to the device identifier key, and the key will hold at most 60 logs, which you can check with ZCARD.
It would be something like this:
127.0.0.1:6379> zadd device:1:2020:06:06:21:21 1591442137 my-iot-payload
(integer) 1
127.0.0.1:6379> zadd device:1:2020:06:06:21:21 1591442138 my-iot-payload-another
(integer) 1
127.0.0.1:6379> zadd device:1:2020:06:06:21:21 1591442138 my-iot-payload-yet-another
(integer) 1
127.0.0.1:6379> zrange device:1:2020:06:06:21:21 0 -1
1) "my-iot-payload"
2) "my-iot-payload-another"
3) "my-iot-payload-yet-another"
127.0.0.1:6379>
In your application layer:
Every minute, for every device, you check the sorted sets (I know you said 5 seconds; if you want that, you need a modulo-based scheme to bucket the keys into 5-second intervals instead of minutes).
You have the list of devices (maybe in a database table), and you know what time it is (convert it to the Redis key).
Get each minute/device key with ZRANGE (with the WITHSCORES option) to do the calculations and validations at the application level for each device and that exact minute.
If they pass, save them into your PostgreSQL database (then delete the sorted set key, or set an EXPIRE whenever you add a new element with ZADD).
If they fail, that's totally up to you. You have minute-separated logs for each device; you may delete them or partially parse and save them. A rough sketch of this flow is shown below.
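A minimal Go sketch of that per-minute flow, assuming the go-redis client (github.com/go-redis/redis/v8); the validate function, the key layout, and the device ID are placeholders for your own logic:
package main

import (
    "context"
    "fmt"
    "time"

    "github.com/go-redis/redis/v8"
)

// validate is a placeholder for the application-level checks described above:
// more than one log, and consecutive timestamps no more than 1 second apart.
func validate(logs []redis.Z) bool {
    if len(logs) < 2 {
        return false
    }
    for i := 1; i < len(logs); i++ {
        if logs[i].Score-logs[i-1].Score > 1 {
            return false
        }
    }
    return true
}

func processMinute(ctx context.Context, rdb *redis.Client, deviceID string, minute time.Time) error {
    // Key layout from the answer: device:<id>:<yyyy:mm:dd:hh:mm>
    key := fmt.Sprintf("device:%s:%s", deviceID, minute.Format("2006:01:02:15:04"))

    // ZRANGE ... WITHSCORES: members are the log payloads, scores are the timestamps.
    logs, err := rdb.ZRangeWithScores(ctx, key, 0, -1).Result()
    if err != nil {
        return err
    }

    if validate(logs) {
        // Saving to PostgreSQL would go here; afterwards drop the processed key.
        return rdb.Del(ctx, key).Err()
    }
    // Invalid: leave the key for the next iteration (or EXPIRE it as a safety net).
    return nil
}

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // Example: process the previous minute for device "1".
    if err := processMinute(ctx, rdb, "1", time.Now().Add(-time.Minute)); err != nil {
        panic(err)
    }
}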

best solution for make a ranged search on 2( or N) sorted set based on their score on Redis

I have some indexes (sorted sets) containing key names sorted with a timestamp as the score. These indexes are for searching purposes; for example, one index apple and one index red, where apple contains all key names referencing an apple and red all key names referencing a red thing.
All of this is sorted by the creation timestamp of the main key, and I want to search on that.
For one field it's not a problem: with pagination, I do a ZRANGE on apple, for example, to get all apples within the pagination range sorted by date. But my problem is when I want to combine two fields.
For example, if I want all red apples, I can do it, sure, but I must either use a ZUNIONSTORE plus a ZRANGE (too slow) or fetch both indexes entirely and filter by date myself. I'm looking for the fastest way to do that.
Thank you for reading :)
The approach you described, a ZUNIONSTORE followed by a ZRANGE, is the most efficient within Redis core. Alternatively, you could use RediSearch for more robust indexing and searching abilities.
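For reference, the two-step approach looks roughly like this (apple, red, red_apple, and item:1 are placeholder names):
127.0.0.1:6379> zadd apple 1591442137 "item:1"
(integer) 1
127.0.0.1:6379> zadd red 1591442137 "item:1"
(integer) 1
127.0.0.1:6379> zunionstore red_apple 2 apple red AGGREGATE MAX
(integer) 1
127.0.0.1:6379> zrange red_apple 0 9 withscores
1) "item:1"
2) "1591442137"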

SAS: alternatives to First. and Last. variables when data can not be sorted?

Please help me with the following SAS problem. I need to transform my data set from "original" to "new" as shown in the picture. Because the "priority" variable cannot be sorted, it seems that first. and last. variables would not work here, would they? The goal is to have each sequence of priorities represent one entry in the "new" dataset.
Thank you!
p.s. I did not know how to create a table in this post so I just took a snapshot of the screen.
Seems fairly straightforward to me. Just create a batch ID.
data have_pret;
  set have;
  by subject;
  if first.subject then batchID=0;
  if priority=1 then batchID+1;
run;
Then you can transpose by subject/batchID. This assumes priority always starts at 1; if it sometimes starts at a value greater than 1, you may want to adjust the logic and keep track of the prior value of priority.
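The transpose step might look something like this (the out= dataset name, the prefix=, and the var list are placeholders to adapt to the actual data):
proc transpose data=have_pret out=want prefix=entry;
  by subject batchID;
  var priority; /* replace with the variable(s) that should become columns in "new" */
run;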

Windows Azure Paging Large Datasets Solution

I'm using Windows Azure Table Storage to store millions of entities, and I'm trying to figure out the best solution that easily allows for three things:
1) a search for an entity will retrieve that entity and at least (pageSize) entities on either side of it
2) if there are more than (pageSize) entities on either side of that entity, then "page next" or "page previous" links are shown; this continues until either the start or the end is reached
3) the order is reverse chronological
I've decided that the PartitionKey will be the Title provided by the user, as each container is unique in the system. The RowKey is Steve Marx's lexicographical algorithm:
http://blog.smarx.com/posts/using-numbers-as-keys-in-windows-azure
which, when converted to JavaScript instead of C#, looks like this:
pad(new Date(100000000 * 86400000).getTime() - new Date().getTime(), 19) + "_" + uuid()
uuid() is a JavaScript function that returns a GUID, and pad adds zeros up to 19 characters in length. So records in the system look something like this:
PK RK
TEST 0008638662595845431_ecf134e4-b10d-47e8-91f2-4de9c4d64388
TEST 0008638662595845432_ae7bb505-8594-43bc-80b7-6bd34bb9541b
TEST 0008638662595845433_d527d215-03a5-4e46-8a54-10027b8e23f8
TEST 0008638662595845434_a2ebc3f4-67fe-43e2-becd-eaa41a4132e2
This pattern puts every newly inserted entity at the top of the list, which satisfies point 3 above.
With a nice way of adding new records to the system, I thought I would then create a mechanism that looks at the first half of the RowKey, i.e. the 0008638662595845431_ part, and does a greater-than or less-than comparison depending on the direction relative to the already-found item. In other words, to get the rows immediately before 0008638662595845431 I would do a query like so:
var tableService = azure.createTableService();
var minPossibleDateTimeNumber = pad(new Date(-100000000*86400000).getTime() - new Date().getTime(), 19);
tableService.getTable('testTable', function (error) {
if (error === null) {
var query = azure.TableQuery
.select()
.from('testTable')
.where('PartitionKey eq ?', 'TEST')
.and('RowKey gt ?', minPossibleDateTimeNumber + '_')
.and('RowKey lt ?', '0008638662595845431_')
.and('Deleted eq ?', 'false');
If more than 1000 results are returned and Azure gives me a continuation token, then I thought I would remember the last item's RowKey, i.e. the number part 0008638662595845431. The next query would then use the remembered value as its starting value, and so on.
I am using the Windows Azure Node.js SDK and the language is JavaScript.
Can anybody see gotchas or problems with this approach?
I do not see how this can work effectively and efficiently, especially to get the rows for a previous page.
To be efficient, the prefix of your “key” needs to be a serially incrementing or decrementing value, instead of being based on a timestamp. A timestamp-generated value would have duplicates as well as holes, making the mapping from page size to row count inefficient at best and difficult to determine at worst.
Also, this potential algorithm is dependent on a single partition key, destroying table scalability.
The challenge here would be to have a method of generating a serially incrementing key. One solution is to use a SQL database and perform an atomic update on a single row, such that an incrementing or decrementing value is produced in sequence. Something like UPDATE … SET X = X + 1 and return X, maybe using a stored procedure.
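As a rough illustration of that atomic counter (the table and column names here are invented, not from the original post), a single T-SQL statement can increment and return the value atomically:
UPDATE dbo.Counters
SET NextKey = NextKey + 1
OUTPUT inserted.NextKey   -- returns the post-update value to the caller
WHERE CounterName = 'entityKey';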
So the key could be a zero-left-padded, serially generated number, split such that, say, the first N digits of the number form the partition key and the remaining M digits form the row key.
For example
PKey RKey
00001 10321
00001 10322
….
00954 98912
Now, since the rows are in sequence it is possible to write a query with the exact key range for the page size.
Caveat: there is a small risk of a failure occurring between generating a serial key and writing to table storage, in which case there may be holes in the table. However, your paging algorithm should be able to detect and work around such instances quite easily, by specifying a page size slightly larger than necessary or by retrying with an adjusted range.

Redis: possible to expire an element in an array or sorted set?

Is it currently only possible to expire an entire key/value pair? What if I want to add values to a List type structure and have them auto-removed 1 hour after insertion? Is that currently possible, or would it require running a cron job to do the purging manually?
There is a common pattern that solves this problem quite well.
Use sorted sets, and use a timestamp as the score. It's then trivial to delete items by score range, which could be done periodically, or on every write, with reads always ignoring the out-of-range elements by reading only a range of scores.
More here: https://groups.google.com/forum/#!topic/redis-db/rXXMCLNkNSs
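A minimal illustration of the pattern (mylist, the members, and the cutoff timestamp are placeholders; in your case the cutoff would be "now minus 1 hour"):
127.0.0.1:6379> zadd mylist 1591442137 "task-a"
(integer) 1
127.0.0.1:6379> zadd mylist 1591445737 "task-b"
(integer) 1
127.0.0.1:6379> zremrangebyscore mylist -inf 1591442200
(integer) 1
127.0.0.1:6379> zrangebyscore mylist 1591442200 +inf
1) "task-b"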
Is it currently only possible to expire an entire key/value pair?
As far as I know, and also according to the key commands and the documentation about expiration, you can currently set an expiration only on a specific key and not on its underlying data structure. However, there is a discussion on Google Groups about this functionality with alternative solutions outlined.
I came upon a different method of handling this; I don't know if it's helpful to any of you, but here goes:
The hash and the sorted set are linked by a guid.
I have a hash that is set to expire in 'x' seconds
I have a sorted set that is used for ranged queries
The data for both is added in a transaction, so if one fails, they both fail.
Upon a ranged query, use 'EXISTS' to see if the hashed value exists as the results are iterated over
If it does not exist, it has expired, so delete the item from the sorted set
What about creating two separate sorted sets?
Main sorted set which is key = value.
Expire sorted set which is key = expire_timestamp.
If you only want to expire a single score, you can set it as key:unique_id = expire_timestamp.
With the help of ZRANGEBYSCORE we can get the expired keys. Then all we need to do is check periodically and ZREM them.
If you only want to expire a single score: ZINCRBY -1.
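Roughly, the pattern would look like this (main, expire, and player:1 are placeholder names; the second score is the expiry timestamp, and a periodic job scans for scores up to "now"):
127.0.0.1:6379> zadd main 42 "player:1"
(integer) 1
127.0.0.1:6379> zadd expire 1591445737 "player:1"
(integer) 1
127.0.0.1:6379> zrangebyscore expire -inf 1591445800
1) "player:1"
127.0.0.1:6379> zrem main "player:1"
(integer) 1
127.0.0.1:6379> zrem expire "player:1"
(integer) 1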
