I would like to reset the value of a REDIS counter to 0 in a Rails 4 app.
I use hincrby to increment counters:
$redis.hincrby("user:likes", "key", 1)
I can't delete the field with hdel http://redis.io/commands/hdel because I need to read it often.
GETSET is atomic and could do the job http://redis.io/commands/getset, as in this example:
GETSET mycounter "0"
But since I use hashes, I need HSET http://redis.io/commands/hset:
$redis.hset("user:likes", "key", "0")
The documentation doesn't specify whether hset is atomic. Has anyone used hset to reset Redis counters to 0? If it's not a good option for resetting a counter to 0, any idea how to do it?
It is atomic, like every individual Redis command, so running $redis.hset("user:likes", "key", "0") resets only the field "key" and doesn't affect any other fields inside the "user:likes" hash.
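For illustration, a minimal sketch of the increment-then-reset cycle; this uses Go's redigo client rather than the Ruby client from the question, but the commands behave the same from any client:

package main

import (
    "fmt"

    "github.com/gomodule/redigo/redis"
)

func main() {
    conn, err := redis.Dial("tcp", "localhost:6379")
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // Increment the counter a couple of times, as the app would on each like.
    conn.Do("HINCRBY", "user:likes", "key", 1)
    conn.Do("HINCRBY", "user:likes", "key", 1)

    // Reset only this field; other fields in user:likes stay untouched.
    conn.Do("HSET", "user:likes", "key", 0)

    likes, _ := redis.Int(conn.Do("HGET", "user:likes", "key"))
    fmt.Println(likes) // 0
}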
I have some code to insert a map into Redis using the HSET command:
prefix := "accounts|asdfas"
data := make(map[string]string) // an empty map like this one triggers the error below
if _, err := conn.Do("HSET", redis.Args{}.Add(prefix).AddFlat(data)...); err != nil {
    return err
}
If data has values in it then this will work but if data is empty then it will issue the following error:
ERR wrong number of arguments for 'hset' command
It seems that this is the result of the AddFlat function converting the map to an interleaved list of keys and their associated values. It makes sense that this wouldn't work when the map is empty, but I'm not sure how to deal with it. I'd rather not add an empty value to the map, but that's about all I can think to do. Is there a way to handle this that's more in line with how things are supposed to be done in Redis?
As a general rule of thumb, Redis doesn't allow and never keeps an empty data structure around (there is one exception to this, though).
Here's an example:
> HSET members foo bar
(integer) 1
> EXISTS members
(integer) 1
> HDEL members foo
(integer) 1
> EXISTS members
(integer) 0
As a result, if you want to keep your data structures around, they have to contain at least one member. You can add a dummy item inside the Hash and ignore it in your application logic, but that trick may not work well with other data structures like Lists.
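So the idiomatic fix for the question above is to skip the write when the map is empty; a minimal sketch (the storeAccount wrapper is hypothetical):

import "github.com/gomodule/redigo/redis"

// storeAccount writes data under prefix, skipping the write entirely when
// the map is empty: Redis cannot represent an empty hash, so there is
// nothing to store and HSET would reject the argument list.
func storeAccount(conn redis.Conn, prefix string, data map[string]string) error {
    if len(data) == 0 {
        return nil
    }
    _, err := conn.Do("HSET", redis.Args{}.Add(prefix).AddFlat(data)...)
    return err
}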
I need to store strings and associate a unique integer to each one. The integer must be as short/small as possible. Is it possible to do that in Redis? Basically I need something like SADD, but instead of returning the number of elements added to the set, it should return the index of the inserted element (whether newly stored or already existing).
Pseudo code:
// if mystring already exists in myset it returns its index
// otherwise stores it and returns its index.
index := storeOrReturnIndex(myset, mystring)
Would using a hashmap cover what you are looking for?
> HSET hashmap 0 "first string"
(integer) 1
> HSET hashmap 1 "second string"
(integer) 1
> HSET hashmap 2 "third string"
(integer) 1
> HGET hashmap 1
"second string"
> HLEN hashmap
3
You can store the last modified index in a key with:
> SET last_modified 1
Then retrieve it with:
> GET last_modified
You can use the Redis INCR command to atomically acquire a new, unique index.
Pattern: Counter
The counter pattern is the most obvious thing you can do with Redis atomic increment operations. The idea is simple: send an INCR command to Redis every time an operation occurs. For instance, in a web application we may want to know how many page views a user did every day of the year.
To do so, the web application may simply increment a key every time the user performs a page view, creating the key name by concatenating the user ID and a string representing the current date.
This simple pattern can be extended in many ways.
So use INCR to atomically get the next unique, smallest index whenever you want to store a new item. Then you can use HSET to store the index associated with your item, and HGET to get the associated index for an item.
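For concreteness, a sketch of the storeOrReturnIndex pseudocode built from those commands, using Go's redigo client; the key names "ids" and "ids:counter" are hypothetical, and the check-then-set is not atomic on its own, so move it into a Lua script (EVAL) if concurrent writers are possible:

import "github.com/gomodule/redigo/redis"

// storeOrReturnIndex returns the existing index for s, or assigns the
// next free one via INCR.
func storeOrReturnIndex(conn redis.Conn, s string) (int, error) {
    // Fast path: the string was indexed before.
    idx, err := redis.Int(conn.Do("HGET", "ids", s))
    if err == nil {
        return idx, nil
    }
    if err != redis.ErrNil {
        return 0, err
    }
    // Acquire a fresh, unique index atomically.
    idx, err = redis.Int(conn.Do("INCR", "ids:counter"))
    if err != nil {
        return 0, err
    }
    // Record the mapping for future lookups.
    _, err = conn.Do("HSET", "ids", s, idx)
    return idx, err
}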
Is there a good way in Redis to get the keys of a hash sorted by their values? I've looked at the documentation and haven't found a straightforward way.
Also, could someone please explain how sorting is achieved in Redis, and what this documentation is trying to say?
I have a very simple hash structure which is something like this:
"salaries" - "employee_1" - "salary_amount"
I'd appreciate a detailed explanation.
You can achieve it by sorting a SET by one of your HASH fields. You should create a SET whose members are the names of your hashes to serve as an index, and use the BY option.
You can also add the DESC option to get the results sorted from high to low.
e.g.
localhost:6379> sadd indices h1 h2 h3 h4
(integer) 4
localhost:6379> hset h1 score 3
(integer) 1
localhost:6379> hset h2 score 2
(integer) 1
localhost:6379> hset h3 score 5
(integer) 1
localhost:6379> hset h4 score 1
(integer) 1
localhost:6379> sort indices by *->score
1) "h4"
2) "h2"
3) "h1"
4) "h3"
localhost:6379> sort indices by *->score desc
1) "h3"
2) "h1"
3) "h2"
4) "h4"
From SORT's documentation page:
Returns or stores the elements contained in the list, set or sorted set at key
So you can't really use it to sort the fields by their values in a Hash data structure. To achieve your goal you should either do the sorting in your application's code after getting the Hash's contents, or use a Redis-embedded Lua script for that purpose, as sketched below.
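As an illustration of the Lua route, a hedged sketch using Go's redigo client: the script sorts the hash's fields by numeric value inside Redis and returns the field names in ascending order (the fieldsByValue wrapper is hypothetical):

import "github.com/gomodule/redigo/redis"

// Lua script: fetch all field/value pairs, sort them by numeric value,
// and return just the field names.
const sortHashByValue = `
local flat = redis.call('HGETALL', KEYS[1])
local entries = {}
for i = 1, #flat, 2 do
  entries[#entries + 1] = {flat[i], tonumber(flat[i + 1])}
end
table.sort(entries, function(a, b) return a[2] < b[2] end)
local out = {}
for _, e in ipairs(entries) do out[#out + 1] = e[1] end
return out
`

// fieldsByValue runs the script against one hash key.
func fieldsByValue(conn redis.Conn, key string) ([]string, error) {
    return redis.Strings(conn.Do("EVAL", sortHashByValue, 1, key))
}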
Edit: After speaking with @OfirLuzon we realized that there is another, perhaps even preferable, approach, which is to use a more suitable data structure for this purpose. Instead of storing the salaries in a Hash, you should consider using a Sorted Set in which each member is an employee ID and the score is the relevant salary. This will give you ordering, ranges and paging for "free" :)
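A rough sketch of that Sorted Set version (same redigo client; key and member names hypothetical):

import "github.com/gomodule/redigo/redis"

// salariesHighToLow stores each employee as a member whose score is the
// salary, then reads them back highest-paid first; the sorted set keeps
// the ordering up to date on every write.
func salariesHighToLow(conn redis.Conn) ([]string, error) {
    _, err := conn.Do("ZADD", "salaries",
        3000, "employee_1", 4500, "employee_2", 2500, "employee_3")
    if err != nil {
        return nil, err
    }
    return redis.Strings(conn.Do("ZREVRANGE", "salaries", 0, -1))
}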
I want to store key-value pairs(T1,T2) in Redis. Both key and value are unique.
I want to be able to query on both key and value, i.e. HGET(Key) should return corresponding Value and HGET(Value) should return corresponding Key.
A trivial approach would be to create 2 Hashes in Redis (T1,T2) and (T2,T1) and then query on appropriate Hash. Problem with this approach is that insertion, update or deletion of pairs would need updates in both Hashes.
Is there a better way to serve my requirement?
If one of T1, T2 has an integer type, you could use a Sorted Set combo like:
1->foo
2->bar
ZADD myset 1 foo
ZADD myset 2 bar
ZSCORE myset foo // returns 1.0 in O(1)
ZSCORE myset bar // returns 2.0 in O(1)
ZRANGEBYSCORE myset 1 1 //returns "foo" in O(log(N)+M)
If this is not the case, then it makes sense to maintain 2 separate hashes, preferably updating both inside a Lua script so that the pair always stays consistent.
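A minimal sketch of keeping the two hashes in sync; the answer suggests Lua, but for this fixed pair of writes a MULTI/EXEC transaction gives the same all-or-nothing behavior (the key names "fwd" and "rev" are hypothetical):

import "github.com/gomodule/redigo/redis"

// setPair records the mapping in both directions atomically.
func setPair(conn redis.Conn, t1, t2 string) error {
    conn.Send("MULTI")
    conn.Send("HSET", "fwd", t1, t2) // T1 -> T2
    conn.Send("HSET", "rev", t2, t1) // T2 -> T1
    _, err := conn.Do("EXEC")        // both writes apply together
    return err
}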
Intuitively, Hadoop is doing something like this to distribute keys to mappers, in Python-esque pseudocode:
# data is a dict with many key-value pairs
keys = data.keys()
key_set_size = len(keys) / num_mappers
index = 0
mapper_keys = []
for i in range(num_mappers):
    end_index = index + key_set_size
    send_to_mapper(keys[int(index):int(end_index)], i)
    index = end_index
# And something vaguely similar for the reducer (but not exactly).
It seems like somewhere hadoop knows the index of each key it is passing around, since it distributes them evenly among the mappers (or reducers). My question is: how can I access this index? I'm looking for a range of integers [0, n) mapping to all my n keys; this is what I mean by an "index".
I'm interested in the ability to get the index from within either the mapper or reducer.
After doing more research on this question, I don't believe it is possible to do exactly what I want. Hadoop does not seem to have such an index that is user-visible after all, although it does try to distribute work evenly among the mappers (so such an index is theoretically possible).
Actually, each individual reducer gets back an array of items that correspond to its reduce key. So do you want the offset of items within the reduce key in your reducer, or do you want the overall offset of the particular item in the global array of all lines being processed? To get an index in your mapper, you can simply prepend a line number to each line of the file before the file gets to the mapper (see the sketch below). This will tell you the "global index". However, keep in mind that with 1,000,000 items, item 662,345 could be processed before item 10,000.
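A minimal sketch of that preprocessing step, in Go for consistency with the other examples here, assuming line-oriented text on stdin:

package main

import (
    "bufio"
    "fmt"
    "os"
)

// Prefix every line with its global line number so each map task can
// recover the line's position in the original file.
func main() {
    scanner := bufio.NewScanner(os.Stdin)
    for n := 0; scanner.Scan(); n++ {
        fmt.Printf("%d\t%s\n", n, scanner.Text())
    }
}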
If you are using the new MR API, then org.apache.hadoop.mapreduce.lib.partition.HashPartitioner is the default partitioner; otherwise org.apache.hadoop.mapred.lib.HashPartitioner is. You can call getPartition() on either of the HashPartitioners to get the partition number for the key (which you mentioned as the index).
Note that the HashPartitioner class is only used to distribute the keys to the Reducer. When it comes to a mapper, each input split is processed by a map task and the keys are not distributed.
Here is the code from HashPartitioner for getPartition(). You can write a simple Java program around it to compute the same value.
public int getPartition(K key, V value, int numReduceTasks) {
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}
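For a quick check outside the JVM, a hedged sketch of the same formula, in Go to match the other examples here; it assumes the keys are plain java.lang.String values (other key types define their own hashCode):

package main

import (
    "fmt"
    "math"
)

// javaHashCode mirrors java.lang.String#hashCode for ASCII input
// (Java hashes UTF-16 code units, which match Go runes only for ASCII).
func javaHashCode(s string) int32 {
    var h int32
    for _, c := range s {
        h = 31*h + int32(c)
    }
    return h
}

// getPartition reproduces HashPartitioner's formula.
func getPartition(key string, numReduceTasks int) int {
    return int(javaHashCode(key)&math.MaxInt32) % numReduceTasks
}

func main() {
    fmt.Println(getPartition("some-key", 10))
}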
Edit: Including another way to get the index.
The following code should also work. It is to be included in the map or the reduce function:
public void configure(JobConf job) {
    partition = job.getInt("mapred.task.partition", 0);
}