Laravel 5.3 and Redis (predis) - autoincrement hash and delete hash `row` - laravel-5

I've been flirting with Redis for a while now.
I watched a video series on it some time ago and it was awesome. I've also been through some of the documentation, and the mention of the time complexity of each query blew me away; that is something rarely covered in web materials but hugely important for app building.
Anyhow, I'm trying to make my app use Redis on the consumer end so users can fetch data as fast as possible.
So I'm trying to save some objects to hash as:
$redis->hmset("taxi_car", array(
    "brand" => "Toyota",
    "model" => "Yaris",
    "license number" => "RO-01-PHP",
    "year of fabrication" => 2010,
    "nr_stats" => 0
));
as found here and this works nicely.
However I can't find a way to delete the whole entry anywhere.
Did I get this hash thing wrong?
Following this example, I would like to delete the entry with a given license number. All I could find is how to delete the license number field from the object:
$redis->hdel("taxi_car", "license number");
but I can't figure out how to delete the whole hash "row" (please correct me on the proper term for a row here).
Another problem is that this seems to only let me save a single taxi_car in Redis. How do I set a UUID so I can have multiple taxi cars?
I'm going to play with this a bit, any help is welcome. Thanks!

To delete a key of any type, Hash included, call the Redis DEL command.
To have multiple keys, give them different names, e.g. taxi_car:1, taxi_car:2 etc.
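A minimal sketch with predis, assuming a connected client in $redis; the key names taxi_cars:next_id and taxi_car:<id> are just illustrative, not anything Laravel or predis prescribes. INCR gives you an atomic auto-incrementing id, each car lives in its own hash, and DEL removes the whole hash in one call:
// Sketch only: assumes a connected Predis\Client in $redis and made-up key names.
$id  = $redis->incr('taxi_cars:next_id');   // atomic counter, so concurrent requests get distinct ids
$key = "taxi_car:{$id}";

// one hash per car
$redis->hmset($key, array(
    'brand'               => 'Toyota',
    'model'               => 'Yaris',
    'license number'      => 'RO-01-PHP',
    'year of fabrication' => 2010,
    'nr_stats'            => 0,
));

// delete the whole "row", i.e. the entire hash, not just one field
$redis->del($key);
If you need to find a car by its license number later, a common pattern is a small secondary index, e.g. a hash mapping license number to id, so you can resolve the id before calling DEL.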

Related

How to create unique ID in format xx-123 on rails

Is it possible to create some unique ID for articles in Rails?
For example, first article will get ID - aa-001,
second - aa-002
...
article #999 - aa-999,
article #1000 - ab-001 and so on?
Thanks in advance for your help!
The following method gives the next id in the sequence, given the one before:
def next_id(id, limit = 3, separator = '-')
  if id[/[0-9]+\z/] == ?9 * limit
    # numeric part is all nines, e.g. "999": bump the letter prefix and reset the digits
    "#{id[/\A[a-z]+/i].next}#{separator}#{?0 * (limit - 1)}1"
  else
    # otherwise String#next handles the increment, e.g. "aa-009" -> "aa-010"
    id.next
  end
end
> next_id("aa-009")
=> "aa-010"
> next_id("aa-999")
=> "ab-001"
The limit parameter specifies the number of digits. You can use as many prefix characters as you want.
Which means you could use it like this in your application:
> Post.last.special_id
=> "bc-999"
> next_id(Post.last.special_id)
=> "bd-001"
However, I'm not sure I'd advise you to do it like this. Databases have smart mechanisms to avoid race conditions when ids are created concurrently; in Postgres, for example, sequences handle concurrent inserts but don't guarantee gapless ids.
This approach has no such mechanism, which could potentially lead to race conditions. However, if that is extremely unlikely to happen, such as in a case where you are the only one writing articles, you could do it anyway. I'm not exactly sure what you want to use this for, but you might want to look into to_param.
You may want to look into the FriendlyId gem. There’s also a Railscast on this topic which covers a manual approach as well as the usage of FriendlyId.

How do I find and remove duplicate mongo documents with ruby

I have a collection in Mongo with duplicates on a specific key that I need to remove all but one of. The Map Reduce solutions don't seem to make it clear how to remove all but one of the duplicates. I am using Ruby; how can I do this in a somewhat efficient way? My current solution is unbelievably slow!
I currently just iterate over an array of the duplicate keys and delete the first document that is returned, but this only works if there is at most one duplicate document per key, and it is really slow.
dupes.each do |key|
  $mongodb.collection("some_collection").remove($mongodb.collection("some_collection").find({key: key}).first)
end
I think you should use MongoDB's ensureIndex() to remove the duplicates. For instance, if you want to drop the duplicate documents for the key duplicate_key, you can do
db.duplicate_collection.ensureIndex({'duplicate_key' : 1},{unique: true, dropDups: true})
where duplicate_collection is the collection containing your duplicate documents. This operation will preserve only a single document when there are duplicates for a particular key.
After the operation, if you want to remove the index, just run the dropIndex operation. For details, see the MongoDB documentation.
A lot of solutions suggest Map Reduce (which is fast and fine), but I implemented a solution in Ruby that seems pretty fast as well and makes it easy to keep exactly one document from each duplicate set.
Basically you find all your duplicate keys by adding them to a hash; any time you encounter a key already in that hash, you add the document's id to an array, which you then use for a bulk removal at the end.
all_keys = {}
dupes = []
dupe_key = "some_key"

$mongodb.collection("some_collection").find.each do |doc|
  if all_keys[doc[dupe_key]].present?
    dupes << doc["_id"]            # key already seen: mark this document for removal
  else
    all_keys[doc[dupe_key]] = 1    # first document with this key: keep it
  end
end

$mongodb.collection("some_collection").remove({_id: {"$in" => dupes}})
The only issue with this method is that it potentially won't work if the total list of keys/dupe ids can't be stored in memory. The map reduce solution would probably be best at that point.

Modeling data in Redis

I am building a system that keeps track of many counters in real time in Redis. Each counter is basically the impression, conversion details for ad keywords shown on a specific url.
i.e. if 10 keywords are shown on a specific url, I need to update a count for each of those keywords for both impressions and conversions. And on each impression of a url, a possibly different set of 10 keywords can be shown.
i.e. the basic data model I need is something like:
url =>
    k1 =>
        impression => 2
        conversion => 1
    k2 =>
        impression => 100
        conversion => 8
    .
    .
    k100 (max around 100)
I understand Redis doesn't have nested hashes, so I can't store a two-level hash as I have shown above.
What is the best way to solve this problem?
I thought of combining k1-impression and k1-conversion into single fields, i.e. something like
url =>
    k1-impression => 100
    k1-conversion => 3
    .
    . and so on
But the problem is that the keys 'k1', 'k2', etc. are of significant length (120-150 bytes), and I don't want to replicate that data, if possible, to save memory.
How would I go about solving this problem?
Any help will be appreciated.
If your keywords are of significant enough length that you're worried about it, you should normalize them. Make a hash of keyword -> id, and a hash of id -> keyword, for encoding and decoding them. Then you can have per-url hashes of the form url => {kw_id:impressions => 1123, kw_id:conversions => 28}. This will also serve you well when you start needing to make indexes of the key words, which you will as soon as you get a requirement to show the top 10 best performing key words across all urls, for example.
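A rough sketch of that normalization with predis (the client in $redis and the key names kw:ids, kw:by_id, kw:next_id and stats:<url> are illustrative assumptions, not part of the original answer):
// Sketch only: assumes a connected Predis\Client in $redis and made-up key names.
function keywordId($redis, $keyword)
{
    // encode keyword -> small integer id, creating the id on first sight
    $id = $redis->hget('kw:ids', $keyword);
    if ($id === null) {
        $id = $redis->incr('kw:next_id');
        $redis->hset('kw:ids', $keyword, $id);      // keyword -> id
        $redis->hset('kw:by_id', $id, $keyword);    // id -> keyword, for decoding later
    }
    return $id;
}

// count an impression and a conversion for one keyword on one url
$url  = 'http://example.com/landing';
$kwId = keywordId($redis, 'some fairly long keyword phrase');
$redis->hincrby("stats:{$url}", "{$kwId}:impressions", 1);
$redis->hincrby("stats:{$url}", "{$kwId}:conversions", 1);
The per-url hash then carries only short numeric field names, and the two lookup hashes give the keyword text back when you need to display or index it.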

RhoMobile 13,000 inserts causing issues due to time

I have a problem (due to time) when inserting around 13,000 records into the device's database.
Is there any way to optimize this? Is it possible to put these all into one transaction? I believe it is currently creating one transaction per insert, which apparently has a diabolical effect on speed.
Currently this takes around 10 minutes, including converting the CSV to a hash (which doesn't seem to be the bottleneck).
Stupidly I am not using RhoSync...
Thanks
Set up a transaction around the inserts and then only commit at the end.
From their FAQ.
http://docs.rhomobile.com/faq#how-can-i-seed-a-large-amount-of-data-into-my-application-with-rhom
db = ::Rho::RHO.get_src_db('Model')
db.start_transaction
begin
  items.each do |item|
    # create hash of attribute/value pairs
    data = {
      :field1 => item['value1'],
      :field2 => item['value2']
    }
    # Creates a new Model object and saves it
    new_item = Model.create(data)
  end
  db.commit
rescue
  db.rollback
end
I've found this technique to be a tremendous speed up.
Use fixed schema rather than property bag, and you can use one transaction (see the link below for how).
http://docs.rhomobile.com/rhodes/rhom#perfomance-tips
This question was answered by someone else on Google Groups (HAYAKAWA Takashi).

data structure to support lookup based on full key or part of key

I need to be able to look up based on the full key or part of the key.
E.g. I might store keys like 10,20,30,40; 11,12,30,40; 12,20,30,40.
I want to be able to search for 10,20,30,40 or 20,30,40.
What is the best data structure for achieving this, best for time?
Our programming language is Java; any pointers to open source projects will be appreciated.
Thanks in advance.
If those were the actual numbers I'd be working with, I'd use an array where a given index contains an array of all records that contain the index. If the actual numbers were larger, I'd use a hash table employed the same way.
So the structure would look like (empty indexes elided, in the case of the array implementation):
10 => ((10,20,30,40)),
11 => ((11,12,30,40)),
12 => ((11,12,30,40), (12,20,30,40)),
20 => ((10,20,30,40), (12,20,30,40)),
30 => ((10,20,30,40), (11,12,30,40), (12,20,30,40)),
40 => ((10,20,30,40), (11,12,30,40), (12,20,30,40)),
It's not clear to me whether your searches are inclusive (OR-based) or exclusive (AND-based), but either way you look up the record groups for each element of the search set; for the inclusive search you find their union, and for the exclusive search you find their intersection.
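A small sketch of that inverted-index idea; it's written in PHP like the rest of this page's examples rather than the asker's Java (where the equivalent shape would be a HashMap<Integer, List<int[]>>), the sample data comes from the question, and the AND-based lookup is just one of the two interpretations mentioned above:
// Sketch: index each record under every element it contains, then answer a
// partial-key query by intersecting the record lists of the query's elements.
$records = array(
    array(10, 20, 30, 40),
    array(11, 12, 30, 40),
    array(12, 20, 30, 40),
);

// build the inverted index: element => list of record indexes
$index = array();
foreach ($records as $i => $record) {
    foreach ($record as $element) {
        $index[$element][] = $i;
    }
}

// AND-based lookup: records that contain every element of the query
function lookup(array $index, array $query)
{
    $result = null;
    foreach ($query as $element) {
        $ids = isset($index[$element]) ? $index[$element] : array();
        $result = ($result === null) ? $ids : array_intersect($result, $ids);
    }
    return ($result === null) ? array() : $result;
}

print_r(lookup($index, array(20, 30, 40)));      // records 0 and 2
print_r(lookup($index, array(10, 20, 30, 40)));  // record 0 only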
Since you seem to care about retrieval time over other concerns (such as space), I suggest you use a hashtable and enter your items several times, once per subkey. So you'd put("10,20,30,40", mydata), then put("20,30,40", mydata) and so on (of course this would be a method; you're not going to call put manually that many times).
Use a tree structure. Here is an open source project that might help ... written in Java :-)
http://suggesttree.sourceforge.net/
