How can I sort my Redis cache?
The data:
SADD key '{"id":250,"store_id":3,"url_path":"\/blog\/testblog123123",
"status":"Published","title":"TestBlog123123",
'"description":"","image":null,"description_2":"",
"date":"2017-04-17","blogcategory":"Category 3"}'
Next, I need to sort my key by id.
This works:
SORT key BY *->id DESC
... but only when id > 10, because Redis sorts by the first number only.
Maybe I should use another command to add the data, but I need the JSON format.
You could use a sorted set from scratch:
ZADD key 250 '{"id":250,"store_id":3,"url_path":"\/blog\/testblog123123",
"status":"Published","title":"TestBlog123123",
'"description":"","image":null,"description_2":"",
"date":"2017-04-17","blogcategory":"Category 3"}'
I'm also not sure why you'd use a Set here at all, because uniqueness of a set member is only guaranteed for the whole JSON string. If your JSON serializer changes the order of two fields in the JSON dict, it will produce another string, which is unique again, and you'll end up with a dangling old string. The same applies if you add more fields to the string.
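For illustration, here's a minimal sketch of that approach with redis-py; the key name blogs and the connection details are my assumptions, not part of the question:

import json
import redis

r = redis.Redis(host="localhost", port=6379)

post = {"id": 250, "store_id": 3, "url_path": "/blog/testblog123123",
        "status": "Published", "title": "TestBlog123123",
        "description": "", "image": None, "description_2": "",
        "date": "2017-04-17", "blogcategory": "Category 3"}

# Score each member by its id so Redis keeps the set ordered for us.
r.zadd("blogs", {json.dumps(post): post["id"]})

# Fetch all posts ordered by id, descending; no SORT call needed.
for raw in r.zrevrange("blogs", 0, -1):
    print(json.loads(raw)["title"])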
In Redis, I'm planning to store the key as a unique string, and the value will be a list.
I have a use case where I need to do two things.
First, I need to get all the values associated with a key by providing the key as input.
Second, I want to get all the keys associated with a value by providing one of the values in the list.
The second part is where I need advice: how can we achieve this?
I cannot fetch all the keys or key-value pairs and loop through them, because I will have millions of entries in Redis.
As mentioned in the comment above, retrieving all keys with an associated value will likely create a performance issue, since it means running through a large number of entries. As suggested in the official documentation about retrieving data from memory caches, you can try the following Redis commands to get a value and see if they solve your purpose:
GET
MGET
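For what it's worth, a small sketch of MGET with redis-py; the key names are made up, and note that GET/MGET operate on string values (e.g. serialized JSON), not lists:

import redis

r = redis.Redis(host="localhost", port=6379)

r.set("user:1", "alice")
r.set("user:2", "bob")

# One round trip for several keys instead of N separate GETs.
print(r.mget("user:1", "user:2"))  # [b'alice', b'bob']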
I have the fields below in a DynamoDB table:
event_on -- string type
user_id -- number type
event name -- string type
Since this table may have multiple records per user_id, and event_on is the single field that can be unique, I made event_on the primary (partition) key and user_id the sort key.
Now I want to delete all records of a user, so my code is:
response = dynamodb.delete_item(
    TableName=events,
    Key={
        "user_id": {"N": str(userId)}
    })
It's throwing this error:
Exception occured An error occurred (ValidationException) when calling
the DeleteItem operation: The provided key element does not match the
schema
Also, is there any way to delete by range key?
Can someone suggest what I should do with the DynamoDB table structure to make this code work?
Thanks,
It sounds like you've modeled your data using a composite primary key, which means you have both a partition key and a sort key. Here's an example of what that looks like with some sample data.
In DynamoDB, the most efficient way to access items (aka "rows" in RDBMS language) is by specifying either the full primary key (getItem) or the partition key (query). If you want to search by any other attribute, you'll need to use the scan operation. Be very careful with scan, since it can be a costly way (both in performance and money) to access your data.
When it comes to deletion, you have a few options.
deleteItem - Deletes a single item in a table by primary key.
batchWriteItem - The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests.
TimeToLive - You can utilize DynamoDB's Time To Live (TTL) feature to delete items you no longer need. Keep in mind that TTL only marks your items for deletion; actual deletion could take up to 48 hours.
In order to effectively use any of these options, you'll first need to identify which items you want to delete. Because you want to fetch using the value of the sort key alone, you have two options:
Use scan to find the items of interest. This is not ideal but is an option if you cannot change your data model.
Create a global secondary index (GSI) that swaps your partition key and sort key values. This pattern is called an inverted index. This would allow you to identify all items with a given user_id.
If you choose option 2, your data would look like this:
This would allow you to fetch all items for a given user, which you could then delete using one of the methods I outlined above.
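As a rough sketch of option 2 with boto3 (the index name user_id-index and the table name events are my assumptions, not from the question; pagination is omitted for brevity):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("events")
user_id = 42  # hypothetical user

# Find every item for the user via the inverted index (GSI).
resp = table.query(
    IndexName="user_id-index",
    KeyConditionExpression=Key("user_id").eq(user_id),
)

# Delete each item using the base table's full primary key.
with table.batch_writer() as batch:
    for item in resp["Items"]:
        batch.delete_item(Key={
            "event_on": item["event_on"],
            "user_id": item["user_id"],
        })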
As you can see here, delete_item needs the full primary key (partition key and, if defined, sort key); you can't delete by the sort key alone. You would have to do a full scan and delete everything that contains the given sort key value.
If you created a DynamoDB table with both a partition key and a sort key, you must provide both values to remove items from that table.
If no sort key was added when the table was created, a record can be removed by the partition key alone.
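For reference, a delete with the full composite key might look like this, in the same low-level client style as the question (the event_on value here is made up):

import boto3

dynamodb = boto3.client("dynamodb")
userId = 42  # hypothetical user

response = dynamodb.delete_item(
    TableName="events",
    Key={
        "event_on": {"S": "2017-04-17"},  # partition key
        "user_id": {"N": str(userId)},    # sort key
    })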
How I solved it: I simply didn't add a sort key when creating the table, and I'm using indexes for sorting and getting items.
I currently have a scenario where we are using Redis to store string field-value pairs within a hashed set (HSET).
The original reasoning behind using hashed sets instead of just sets is the ease of retrieving records using HSCAN inside a GUI search bar, as opposed to just SCAN, because it's easier to get the length of a hash to use in the COUNT field.
I read in the Redis documentation that both the GET and HGET commands execute with O(1) time complexity, but a member of my team thinks that if I store all the values inside a single key, then HGET basically returns the entire hash instead of the single field-value pair I need.
So for a made up but similar example:
I have a Redis instance with a single Hashed Set called users.
The hashed set has 150,000 field:value pairs of username:email
If when I execute hget users coolguy, is the entire hash getting returned or just the email for user coolguy?
First of all, HSET does not create a hash set; it creates a hash table. The mechanism behind the hash table and the set (which is indeed a hash set) in Redis is the same; the difference is mainly that the hash table has values.
To answer your question:
If when I execute hget users coolguy, is the entire hash getting returned or just the email for user coolguy?
Just the email for that user. You can also use HMGET to get the emails of multiple users at once. It's O(1) for each user you fetch, or O(n) for n users.
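A quick way to convince yourself, using redis-py (the hash name users mirrors the example; the emails and connection details are made up):

import redis

r = redis.Redis(host="localhost", port=6379)

r.hset("users", "coolguy", "coolguy@example.com")
r.hset("users", "frank", "frank@example.com")

# Returns only the one field, not the whole hash.
print(r.hget("users", "coolguy"))            # b'coolguy@example.com'

# O(1) per requested field, O(n) for n fields.
print(r.hmget("users", "coolguy", "frank"))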
I'm no expert in Redis, so does anyone know how I can create a key that has subkeys, where each subkey has its own expire time?
Is this possible in Redis?
It would be something like this:
[:keyX]
|
V
[:keyZ][:value]
|
V
EXPIRE keyZ 100
P.S. The app is in Ruby.
Thanks!
Redis does not have nested keys, although the Hash data type could work for you. Also, Redis expiry applies only to keys - Hash fields, List elements, and Sorted or regular Set members cannot be assigned an independent TTL.
Your question does not detail why you're looking to do that (i.e. store keys under a "root" key and have each key expire on its own). You can get the per-key expiration effect by using plain ol' regular keys, or use a Hash to aggregate all the fields under one common key - but not both at the same time.
That said, if you really need this sort of functionality you can always try implementing it yourself - see here for a possible direction: Redis: To set timeout for a key value pair in Set
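As a small sketch of the "plain regular keys" route in Python with redis-py (the keyX: prefix is just a naming convention I'm assuming, not a Redis feature; the same commands work from Ruby):

import redis

r = redis.Redis(host="localhost", port=6379)

# Each "subkey" is an ordinary key under a common prefix,
# so it can carry its own TTL.
r.set("keyX:keyZ", "value", ex=100)  # SET keyX:keyZ value EX 100

# Enumerate the "subkeys" of keyX without blocking the server.
for k in r.scan_iter(match="keyX:*"):
    print(k, r.ttl(k))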
What's a good strategy for generating auto-incrementing keys in LevelDB? My goal is to be able to iterate over the keys in the order that they were inserted.
Two methods:
use the default comparator, but convert the index key with a function, so that '1' becomes something like '000000001' and '20' becomes '000000020'; LevelDB will then place them near each other;
define a new comparator yourself, which converts the key from string to integer, so you can compare the integers.
With either of the above two methods, you need to store a key-value pair in LevelDB, current_id ----> integer, or you can store the current id in a separate file using mmap.
Then, with your self-defined Add() function, after you get the current id from the key current_id, you can insert a new key-value pair, id ----> value, and then increment current_id by one.
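Here's a rough sketch of method 1 in Python with the plyvel bindings (the k prefix, the 9-digit width, and the database path are arbitrary choices; this is not safe for concurrent writers):

import plyvel

db = plyvel.DB("/tmp/mydb", create_if_missing=True)

def next_id(db):
    # Read and bump the counter stored under a reserved key.
    raw = db.get(b"current_id")
    current = int(raw) if raw is not None else 0
    db.put(b"current_id", str(current + 1).encode())
    return current

def add(db, value):
    # Zero-pad so lexicographic order equals numeric order.
    db.put(b"k%09d" % next_id(db), value)

add(db, b"first")
add(db, b"second")

# Iterates in insertion order thanks to the padded keys.
for key, value in db.iterator(prefix=b"k"):
    print(key, value)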
Since a LevelDB instance can only be accessed from one application at a time, you might as well use a 64-bit long and increment it in the application. When opening the DB (and before allowing any writes), to find the last inserted key you can use the SeekToLast() method of the Iterator.
As I just pointed out in a question on integer keys, if you want to use binary integers you need to create a custom Comparator for the database, otherwise you don't get them in ascending binary order. It's not hard but you may have overlooked the need.
I'm not quite sure what you're asking. If the only data you are adding is keys that are supposed to record entries, as in a log, then yes, just use an integer key.
However, if you are inserting keys you are going to search for some other reason PLUS you want to later iterate them in insertion order, it gets a bit more complex.
Basically you want to insert two keys for each key value, using a prefix to determine whether keys are "value keys" or "ordering keys". e.g., say you have Frank, John, Sally and Amy as keys and use prefix ~N for Name keys and ~I for Iterator keys.
The database looks like the following, note that the "Iterator keys" don't have a value associated with them as we can just get the names out of the key. I've shown it as if you used a string of two digits for the number, rather than using an integer value and needing a special Comparator.
~I00Frank
~I01John
~I02Sally
~I03Amy
~NAmy => Amy's details
~NFrank => frank's details
~NJohn => John's details
~NSally => Sally's details
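A rough sketch of this scheme in Python with plyvel (the ~N/~I prefixes and the two-digit counter follow the example above; the database path and details are my assumptions):

import plyvel

db = plyvel.DB("/tmp/names", create_if_missing=True)

def insert(db, seq, name, details):
    # "Value key": holds the actual record.
    db.put(b"~N" + name.encode(), details.encode())
    # "Ordering key": empty value, the name is recovered from the key.
    db.put(b"~I%02d" % seq + name.encode(), b"")

insert(db, 0, "Frank", "Frank's details")
insert(db, 1, "John", "John's details")

# Iterate in insertion order via the ~I prefix.
for key, _ in db.iterator(prefix=b"~I"):
    name = key[4:].decode()  # strip b"~I" plus the 2-digit counter
    print(name, db.get(b"~N" + name.encode()))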