Currently I am caching data from ActiveRecord to Redis by doing the following:
redis.rb:

$redis = Redis::Namespace.new("bookstore", :redis => Redis.new)

authors_helper.rb:

def fetch_authors
  authors = $redis.get('authors')
  if authors.nil?
    authors = Author.all.to_json
    $redis.set("authors", authors)
    $redis.expire("authors", 5.hours.to_i)
  end
  JSON.load authors
end
So currently I am using basic set and get to cache and read data from Redis.
I want to use hmset instead of just set. The Redis way to do this job is as follows:
(Just an example)
HMSET user:1001 name "Mary Jones" password "hidden" email "mjones@example.com"
The authors table in my app consists of the following fields: id, name, created_at, updated_at.
What is the Ruby way to use hmset so that I can cache the authors data in a Redis hash?
I don't think you can save all authors this way. This is because a hash can store only one value for each key. So name and created_at cannot be keys, because all authors need to store their own values for these keys, but you can use each key only once.
If you're using Ruby on Rails, using Rails.cache is preferred - this way you don't have to worry about the way Rails stores the object in Redis.
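For example, a minimal sketch with Rails.cache.fetch (assuming a Redis-backed cache store is configured for Rails.cache; the 5-hour expiry mirrors the original helper):

def fetch_authors
  Rails.cache.fetch("authors", expires_in: 5.hours) do
    Author.all.to_a  # cache the materialized list of authors
  end
end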
However, if you want to use hmset for some reason, I believe the best you can do is something like this:
authors_data = Author.all.flat_map { |author| [author.id.to_s, author.attributes.to_json] }
$redis.hmset("authors", *authors_data)
The first line will return something like this:
['1', '{"name": "Mary Jones", "email": "m#example.com"}', '2', '{"name": "Another name", "email": "e#example.com"']
The hmset command does not accept an array but a flat list of arguments; that's why in the second line you need to pass *authors_data to the function.
Then, internally it will look like this:
{
  '1' => '{"name": "Mary Jones", "email": "m@example.com"}',
  '2' => '{"name": "Another name", "email": "e@example.com"}'
}
Later you can do $redis.hmget("authors", '1'), which will return the String '{"name": "Mary Jones", "email": "m@example.com"}'.
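Putting the pieces together, a hedged sketch of a hash-based fetch_authors (same $redis namespace and expiry as the original helper; note it returns a Hash keyed by author id rather than an Array):

def fetch_authors
  cached = $redis.hgetall("authors")  # hgetall returns a Ruby Hash, empty if the key is missing
  if cached.empty?
    authors_data = Author.all.flat_map { |author| [author.id.to_s, author.attributes.to_json] }
    if authors_data.any?  # hmset errors on an empty argument list
      $redis.hmset("authors", *authors_data)
      $redis.expire("authors", 5.hours.to_i)
    end
    cached = Hash[authors_data.each_slice(2).to_a]
  end
  cached.each_with_object({}) { |(id, json), out| out[id] = JSON.parse(json) }
end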
Say I have a users table with an embedded followers array property.
{
  "followers": ["bar"],
  "name": "foo"
}
In RethinkDB, what's the best way to add a username to that followers property, i.e. add a follower to a user?
In MongoDB I would use the $addToSet operator to add a unique value to the embedded array. Should I use the merge command?
RethinkDB has a setInsert command. You can write:

table.get(ID).update({followers: r.row('followers').setInsert(FOLLOWER)})
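In Ruby, a hedged sketch using the official rethinkdb gem (where the command is spelled set_insert; user_id and new_follower are placeholders):

require 'rethinkdb'
include RethinkDB::Shortcuts

conn = r.connect(host: 'localhost', port: 28015)
# Insert new_follower into the followers set of one user document.
r.table('users').get(user_id).update { |user|
  { followers: user['followers'].set_insert(new_follower) }
}.run(conn)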
I'm new to CouchDB and stuck on one scenario. I have the following data.
{
  "_id": "1",
  "firstName": "John",
  "lastName": "John"
}
I am writing a view to return documents where firstName = "John" or lastName = "John", and I have the following map. So the query will be /view/byName?key="John"
function(doc) {
  emit(doc.firstName, doc);
  emit(doc.lastName, doc);
}
I can filter out the duplicates in reduce; however, I am searching for a way to filter the documents in map.
If by filter you mean getting all unique values, then reduce is the right way to do it. CouchDB: The Definitive Guide suggests this as well. Just create a dummy reduce:
function(key,values){return true;}
and call your view with ?group=true and you will have all the unique results.
If I understand you correctly, you want to have both documents in the case of "John Smith" and "John Black", but "John John" should be reported once.
CouchDB gives you the unique set of the keys you emit ("John" in your example). Just emit the pair of name and document id ([doc.firstName, doc._id] and [doc.lastName, doc._id]) and reduce will do what you want.
["John", "ID_OF_SMITH"] != ["John", "ID_OF_BLACK"]
["John", "ID_OF_JOHNJOHN"] == ["John", "ID_OF_JOHNJOHN"]
I'm trying to cache my tweets and show them based on my saved keyword. However, as the tweets grow over time I need to paginate them.
I'm using Ruby and Mongoid, and this is what I have come up with so far.
class SavedTweet
  include Mongoid::Document
  field :saved_id, :type => String
  field :slug, :type => String
  field :tweets, :type => Array
end
And the tweets array would be like this
{id: "id", text: "text", created_at: "created_at"}
So it's like a bucket for each keyword that you can save. My first problem is that MongoDB cannot sort the second level of a document, which in this case is tweets, and that makes pagination much harder because I cannot use skip and limit. I would have to load all the tweets, put them in the cache, and paginate from that.
The question is how I should model my problem to make it paginatable in MongoDB rather than in memory. I'm assuming that doing it in MongoDB would be faster. Right now I'm in the early stage of my application, so it's easier to change the model now than later. I would really appreciate any suggestions or opinions.
An option could be to save the tweets in a different collection and link them to your SavedTweet class. They would be easy to query, and you could use skip and limit without problems.
{id: "id", text: "text", created_at: "created_at", saved_tweet:"_id"}
EDIT: a better explanation, with two additional options.
As far as I can see, you have three options, if I understand your requirements correctly:
Use the same schema that you are already using. You would have two problems: you cannot use skip and limit with an ordinary query, and you have a limit of 16 MB per document. I think the first one could be resolved with an Aggregation Framework query ($unwind, $skip and $limit could be helpful; see the sketch below). The second one could be a problem if you have a lot of tweet documents in the array, because one document cannot be larger than 16 MB.
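For example, a hedged sketch of that aggregation pipeline (assuming a Mongoid version whose collection object exposes aggregate; the slug value is only illustrative):

page     = 1
per_page = 20
SavedTweet.collection.aggregate([
  { "$match" => { "slug" => "adkj" } },  # select one keyword bucket
  { "$unwind" => "$tweets" },            # one result document per embedded tweet
  { "$skip"  => page * per_page },
  { "$limit" => per_page }
])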
Use two collections to store your tweets. One collection would have the same structure that you already have. For example:
{
  save_id: "23232",
  slug: "adkj"
}
And the other collection would have one document per tweet.
{
  id: "id",
  text: "text",
  created_at: "created_at",
  saved_tweet: "_id"
}
With the saved_tweet field you are linking saved_tweets to tweets in a 1-to-N relation. This way you can run queries over the tweet collection and still use the limit and skip operators, as in the sketch below.
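A hedged sketch of what that two-collection setup could look like in Mongoid (model and field names are only illustrative):

class SavedTweet
  include Mongoid::Document
  field :saved_id, :type => String
  field :slug, :type => String
  has_many :tweets
end

class Tweet
  include Mongoid::Document
  field :text, :type => String
  field :created_at, :type => Time
  belongs_to :saved_tweet
end

# Page through one bucket's tweets, 20 per page (page 3 shown):
Tweet.where(saved_tweet_id: bucket.id).desc(:created_at).skip(40).limit(20)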
Save all info in the same document. If your saved_tweet collection only has those fields, you can save all the info in a single document (one document for each tweet). Something like this:
{
  save_id: "23232",
  slug: "adkj",
  tweet: {
    tweet_id: "id",
    text: "text",
    created_at: "created_at"
  }
}
With this solution you are duplicating fields, because save_id and slug would be the same in other documents of the same saved_tweet, but it could be an option if you have a small number of fields and those fields are not subdocuments or arrays.
I hope it is clear now.
I am using Mailgun for sending email. It has an API to add an “Unsubscribe me” feature, and I am using it in my Rails app.
Using this command, I get a list of all unsubscribed users, i.e. the entries in Mailgun's unsubscribes table:
RestClient.get "https://api:key-3ax6xnjp29jd6fds4gc373sgvjxteol0@api.mailgun.net/v2/samples.mailgun.org/unsubscribes"
I am storing its output in @unsubscribers, so my controller has:

@unsubscribers = RestClient.get "https://api:key-3ax6xnjp29jd6fds4gc373sgvjxteol0@api.mailgun.net/v2/samples.mailgun.org/unsubscribes"

When I display the output in the view with <%= @unsubscribers %>, I get the string:
{
  "total_count": 1,
  "items": [{
    "created_at": "Sun, 11 Aug 2013 08:07:22 GMT",
    "tag": "*",
    "id": "sdfsdfw12423535456",
    "address": "xyz@abc.com"
  }]
}
As I want to delete unsubscribed emails from my database, I want only the emails in @unsubscribers, but it contains the whole string.
I cannot figure out how to extract the emails from the above string so that I have a list of emails in @unsubscribers and can delete them from my app.
Can anybody help me?
If you are trying to access the address of the first subscriber:

@unsubscribers['items'].first['address']

Safer code:

((@unsubscribers['items'] || []).first || {})['address']

If you are trying to collect all the addresses:

(@unsubscribers['items'] || []).map { |s| s['address'] }
@unsubscribers is a hash, and items is an array of hashes. You need to get the first item using [0], then take the address value:

@unsubscribers["items"][0]["address"]
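Note that RestClient.get returns the raw response body as a String, so both answers above assume it has already been parsed. A hedged sketch of the whole flow (User is a hypothetical model standing in for wherever the emails live in your app):

require 'rest_client'
require 'json'

response = RestClient.get "https://api:key-3ax6xnjp29jd6fds4gc373sgvjxteol0@api.mailgun.net/v2/samples.mailgun.org/unsubscribes"
@unsubscribers = JSON.parse(response)  # turn the JSON string into a Hash

emails = (@unsubscribers['items'] || []).map { |item| item['address'] }
User.where(email: emails).destroy_all  # hypothetical cleanup step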
I am using CodeIgniter and Alex Bilbie's MongoDB library.
In the API that I am developing, users can upload images and other users can comment on them.
I have chosen to include the comments as sub documents to the images.
Each comment contains:
Fullname (of author)
Comment
Created_at
So, in other words, the user's full name is "hard coded" into each comment, so if they later decide to change their name I have a problem.
I read that I can use atomic updates to update all occurrences of the name (like in comments), but how can I do this using Alex's library? Can I update all places where the name is wrong?
UPDATE
This is what the image document looks like with the comments.
I think it is pretty strange that MongoDB encourages the use of subdocuments but then does not include a way to update multiple items in an array.
{
  "_id": ObjectId("4e9ead773dc793dc01020000"),
  "description": "An image",
  "category": "accident",
  "comments": [
    {
      "id": ObjectId("4e96bd063dc7937202000000"),
      "fullname": "James Bond",
      "comment": "This is a comment.",
      "created_at": "2011-10-19 13:02:40"
    }
  ],
  "created_at": "2011-10-19 12:59:03"
}
Thankful for all help!
I am not familiar with CodeIgniter, but maybe MongoDB shell syntax will help you:
db.comments.update(
  { "Fullname": "Andrew Orsich" },
  { $set: { Fullname: "New name" } },
  false, true
)
The last true flag indicates that you want to update multiple documents, so it is possible to update all comments in one update operation.
BTW: denormalizing (not "hard coding") data in MongoDB, and in NoSQL in general, is a usual operation. Also, operations that require updating a lot of documents usually run asynchronously. But it is up to you.
Update:

db.comments.update(
  { "comments.Fullname": "Andrew Orsich" },
  { $set: { "comments.$.Fullname": "New name" } },
  false, true
)
But the above query will only update the full name in the first matching comment of each document's nested array. If you need to change more than one array element, you will need to run multiple update statements, for example in a loop like the sketch below.
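Since the positional $ operator only touches the first matching element per document, a hedged shell-side loop (assuming the comments are embedded in an images collection with a lowercase fullname field, as in the question's document) could repeat the update until nothing matches:

// Keep updating until no document still contains the old name.
while (db.images.count({ "comments.fullname": "Andrew Orsich" }) > 0) {
  db.images.update(
    { "comments.fullname": "Andrew Orsich" },
    { $set: { "comments.$.fullname": "New name" } },
    false, true
  );
}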