Does anyone know how to use the Mahout library to provide friend (link) recommendations in a social network? The network could be directed or undirected, and might have only positive links, or both positive and negative links.
Such as the datasets found at Stanford.
If I understand your question, the answer is yes. I mined a bunch of "pinned people" from Pinterest and created a recommender using Mahout. It turned up some pretty interesting recs that convinced me it was working.
If you treat the action as "friending", then the data is: (user-doing-the-friending, user-friended). Use a boolean item-based recommender with LLR as the similarity measure, et voilà.
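For concreteness, here is a minimal Python sketch of the log-likelihood ratio (LLR) statistic itself, the same test Mahout's LogLikelihoodSimilarity is built on (the standalone function and toy counts are my own illustration, not Mahout code):

from math import log

def xlogx(x):
    return x * log(x) if x > 0 else 0.0

def entropy(*counts):
    # un-normalized Shannon entropy term used by the G^2 statistic
    return xlogx(sum(counts)) - sum(xlogx(c) for c in counts)

def llr(k11, k12, k21, k22):
    """k11: users who friended both A and B; k12: A but not B;
    k21: B but not A; k22: neither."""
    row = entropy(k11 + k12, k21 + k22)
    col = entropy(k11 + k21, k12 + k22)
    return max(0.0, 2.0 * (row + col - entropy(k11, k12, k21, k22)))

print(llr(10, 5, 4, 1000))  # large value = A and B co-occur far more than chance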
Using negative links is tougher. The obvious thing is to filter an individual's recs by the negative links for that user. Another approach might be to create "friending" recs and "hating" recs separately, then filter an individual's friending recs by their hating recs.
BTW, the Epinions data is often used as a base test set for the collaborative filtering code in Mahout.
I'm attempting to write some code for item-based collaborative filtering for product recommendations. The input has buyers as rows and products as columns, with a simple 0/1 flag to indicate whether or not a buyer has bought an item. The output is a list of similar items for a given purchased item, ranked by cosine similarity.
I am attempting to measure the accuracy of a few different implementations, but I am not sure of the best approach. Most of the literature I find mentions using some form of mean square error, but this really seems more applicable when your collaborative filtering algorithm predicts a rating (e.g. 4 out of 5 stars) instead of recommending which items a user will purchase.
One approach I was considering was as follows...
split data into training/holdout sets, train on training data
For each item (A) in the set, select data from the holdout set where users bought A
Determine what percentage of A-buyers bought one of the top 3 recommendations for A-buyers
The above seems kind of arbitrary, but I think it could be useful for comparing two different algorithms when trained on the same data.
Actually, your approach is quite similar to the literature, but I think you should consider using recall and precision, as most of the papers do.
http://en.wikipedia.org/wiki/Precision_and_recall
Moreover, if you use Apache Mahout, there is an implementation of precision and recall in the GenericRecommenderIRStatsEvaluator class.
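If you end up outside Mahout, the computation itself is small. Here is a minimal Python sketch of precision and recall at N against a per-user holdout set (the recommend() callable and toy data are stand-ins for whatever recommender is under test):

def precision_recall_at_n(recommend, holdout, n=3):
    precisions, recalls = [], []
    for user, held_out_items in holdout.items():
        top_n = set(recommend(user, n))   # recommender under test
        hits = top_n & held_out_items     # relevant items that were retrieved
        precisions.append(len(hits) / n)
        recalls.append(len(hits) / len(held_out_items))
    return sum(precisions) / len(precisions), sum(recalls) / len(recalls)

# Example with a trivial stand-in "recommender" and a toy holdout:
holdout = {"u1": {"a", "b"}, "u2": {"c"}}
recommend = lambda user, n: ["a", "c", "d"][:n]
print(precision_recall_at_n(recommend, holdout, n=3))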
The best way to test a recommender is always to manually verify the results. However, some kind of automatic verification is also good.
In the spirit of a recommendation system, you should split your data in time and see if your algorithm can predict the user's future purchases. This should be done for all users.
Don't expect it to predict everything; 100% correctness is usually a sign of over-fitting.
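A minimal sketch of that temporal split, assuming events are (user, item, timestamp) tuples (the field layout and the 80/20 cut are my own choices for illustration):

def temporal_split(events, train_fraction=0.8):
    """events: list of (user, item, timestamp) tuples."""
    events = sorted(events, key=lambda e: e[2])  # oldest first
    cut = int(len(events) * train_fraction)
    return events[:cut], events[cut:]            # train on past, test on future

events = [("u1", "a", 1), ("u1", "b", 2), ("u2", "a", 3), ("u1", "c", 4)]
train, test = temporal_split(events)
print(train)
print(test)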
I'm looking to write a basic recommender system in Objective-C and need a basic algorithm for the job. Unfortunately, off-the-shelf systems are off the table, since none seem to exist for Objective-C.
I'm going to have a database of items, each with tags (think movies with tags like "horror", "action", etc). Each item would have ~5 or so of these tags. When a user first uses the app their profile will be primed based on their input to a series of questions, associating some tags with their profile.
As the user continues to use the system and rate various items (on a hate/like/love basis), I'd like to adjust the weighting of the recommended tags based on that feedback. I'd also like to take a few other properties of the rated items into account as their profile grows, for example "the 80s" if this dealt with movies. Or maybe director, sticking with the movie theme.
I'm opting to avoid the normal (or at least popular) recommender systems where it looks for similar users to generate recommendations. This is going to have a large item database and minimal users to start.
Can anyone recommend a good starting point for an algorithm like this? I'd hate to reinvent the wheel, and there's a lot out there.
Please have a look at python-recsys: https://github.com/ocelma/python-recsys. This software uses an SVD algorithm; I think it is a basic algorithm, but effective enough. The required libraries are numpy and scipy, which are written in C and wrapped in Python, so I think the approach would be easy to port to Objective-C.
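This is not python-recsys itself, but a hedged numpy/scipy sketch of the same core idea: factor the ratings matrix with a truncated SVD, then measure item similarity in the latent space (the toy matrix and k=2 are arbitrary):

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# rows = users, cols = items; 0 = unrated
ratings = csr_matrix(np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [1.0, 0.0, 4.0, 4.0],
]))
u, s, vt = svds(ratings, k=2)        # truncated SVD, k latent factors
item_factors = (np.diag(s) @ vt).T   # one latent vector per item

def similar_items(item_idx, top=2):
    v = item_factors[item_idx]
    sims = item_factors @ v / (
        np.linalg.norm(item_factors, axis=1) * np.linalg.norm(v) + 1e-12)
    return np.argsort(-sims)[1:top + 1]  # skip the item itself

print(similar_items(0))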
You can use item-based recommendation, which sounds perfect for your needs. You can later start to incorporate tags into the weighting, but for now I'd suggest considering only the items.
You can learn a little more at http://www.cs.carleton.edu/cs_comps/0607/recommend/recommender/itembased.html There are a good few implementations of it on the net.
What you mentioned in your post would be called user-based collaborative filtering.
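To make the item-based idea concrete, here is a small hedged Python sketch of the scoring step: given an item-item similarity table (however you compute it) and the user's rated items, score unseen items by similarity-weighted ratings. The toy data and the hate/like/love mapping are my own inventions:

def score(candidate, user_ratings, sim):
    # similarity-weighted average of the user's ratings
    num = sum(sim.get((candidate, i), 0.0) * r for i, r in user_ratings.items())
    den = sum(abs(sim.get((candidate, i), 0.0)) for i in user_ratings)
    return num / den if den else 0.0

sim = {("c", "a"): 0.9, ("c", "b"): 0.3, ("d", "a"): 0.1, ("d", "b"): 0.8}
user_ratings = {"a": 2.0, "b": -1.0}  # e.g. love=2, like=1, hate=-1 (my mapping)

for cand in ["c", "d"]:
    print(cand, score(cand, user_ratings, sim))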
I am building a Rails app that recommends tutors to students and vice versa. I need to match them based on multiple dimensions, such as their majors (Math, Biology etc.), experience (junior etc.), class (Math 201 etc.), preference (self-described keywords) and ratings.
I checked out some Rails collaborative recommendation engines (recommendable, recommendify) and Mahout. It seems that collaborative recommendation is not the best choice in my case, since I have much more structured data, which allows a more structured query. For example, I can have a recommendation logic for a student like:
if student looks for a Math tutor in Math 201:
if there's a tutor in Math major offering tutoring in Math 201 then return
else if there's a tutor in Math major then sort by experience then return
else if there's a tutor in quantitative major then sort by experience then return
...
My questions are:
What are the benefits of a collaborative recommendation algorithm given that my recommendation system will be preference-based?
If it does provide significant benefits, how can I combine it with a preference-based recommendation as mentioned above?
Since my approach will involve querying multiple tables, it might not be efficient. What should I do about this?
Thanks a lot.
It sounds like your measurement of compatibility could be profitably reformulated as a metric. What you should do is try to interpret your 'columns' as different dimensions of your data. The idea is that you should ultimately produce a function of two arguments that returns a measurement of compatibility between students and tutors (and also students/students and tutors/tutors). The motivation for extending this metric to all types of data is that you can then reformulate your matching criteria as a nearest-neighbor search:
http://en.wikipedia.org/wiki/Nearest_neighbor_search
There are plenty of data structures and solutions to this problem as it has been very well studied. For example, you could try out the following library which is often used with point cloud data:
http://www.cs.umd.edu/~mount/ANN/
To optimize things a bit, you could also try prefiltering your data by running principal component analysis on your data set. This would let you reduce the dimension of the space in which you do nearest neighbor searches, and usually has the added benefit of reducing some amount of noise.
http://en.wikipedia.org/wiki/Principal_component_analysis
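You're on Rails, but as a language-neutral illustration, here is a short Python sketch of the pipeline described above: PCA (via SVD) to reduce dimension, then a k-d tree for nearest-neighbor queries. The random matrix stands in for encoded student/tutor profiles:

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))   # 1000 profiles, 20 encoded features

# PCA via SVD: center the data, project onto the top principal components.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ vt[:5].T                 # keep 5 components

tree = cKDTree(Z)
dist, idx = tree.query(Z[0], k=6) # 5 nearest neighbors (plus the point itself)
print(idx[1:])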
Good luck!
Personally, I think collaborative filtering (CF) would work well for you. Note that a central idea of CF is serendipity. In other words, adding too many constraints might result in lukewarm recommendations to users. The whole point of CF is to provide exciting and relevant recommendations based on similar users; you need not impose such tight constraints.
If you decide to implement a custom CF algorithm, I would recommend reading this article published by Amazon [pdf], which discusses Amazon's recommendation system. Briefly, the algorithm they use is as follows:
for each item I1
    for each customer C who bought I1
        for each item I2 bought by customer C
            record that a customer bought both I1 and I2
    for each item I2
        compute sim(I1, I2)
        // this could use your own similarity measure, e.g. cosine-based
        // similarity: sim(A, B) = cos(A, B) = (A . B) / (|A| |B|), where A
        // and B are vectors (items, or courses in your case) and the
        // dimensions are customers
return table
Note that the creation of this table would be done offline. The online algorithm would be quick to return recommendations. Apparently, the recommendation quality is excellent.
In any case, if you want to get a better idea about cf in general (e.g., various cf strategies) and why it might be suited to you, go through that article (don't worry, it is very readable). Implementing a simple cf recommender is not difficult. Optimizations can be made later.
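As a concrete rendering of that table-building loop, here is a hedged Python sketch using cosine similarity over binary customer vectors (the toy purchase data is mine, and this is my reading of the paper's loop, not Amazon's code):

from collections import defaultdict
from itertools import combinations
from math import sqrt

# purchases: customer -> set of items bought (toy data for illustration)
purchases = {
    "alice": {"i1", "i2", "i3"},
    "bob":   {"i1", "i2"},
    "carol": {"i2", "i3"},
}

# Count, for every item pair, how many customers bought both.
pair_counts = defaultdict(int)
item_counts = defaultdict(int)
for items in purchases.values():
    for item in items:
        item_counts[item] += 1
    for i1, i2 in combinations(sorted(items), 2):
        pair_counts[(i1, i2)] += 1

# Cosine over binary customer vectors: |A and B| / (sqrt(|A|) * sqrt(|B|))
def sim(i1, i2):
    both = pair_counts.get(tuple(sorted((i1, i2))), 0)
    return both / (sqrt(item_counts[i1]) * sqrt(item_counts[i2]))

table = {(a, b): sim(a, b) for (a, b) in pair_counts}
print(table)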
I have a database of about 700k users along with items they have watched/listened to/read/bought/etc.
I would like to build a recommendation engine that recommends new items based on what users with similar taste in things have enjoyed, as well as actually finding people the user might want to be friends with on a social network I'm building (similar to last.fm).
My requirements are as follows:
Majority of the "users" in my database aren't actually users of my website. They have been data mined from third-party sources. However, when recommending users, I would like to limit the search to people who are members of my website (while still taking advantage of the bigger data set).
I need to take multiple items into consideration. Not "people who like this one item you enjoyed...", but "people who like most of the items you enjoyed...".
I need to compute similarities between users and show them when viewing their profiles (taste-o-meter).
Some items are rated, others are not. Ratings are from 1-10, not boolean values. In most cases it would be possible to infer a rating value from other stats if it's not present (e.g. if the user has favourited an item but hasn't rated it, I could just assume a rating of 9).
It has to interact with Python code in one way or another. Preferably, it should use a separate (possibly NoSQL) database and expose an API to use in my web back-end. The project I'm making uses Pyramid and SQLAlchemy.
I would like to take item genres into account.
I would like to display similar items on item pages based on both its genre (possibly tags) and what users who enjoyed the item liked (like Amazon's "people who bought this item" and Last.fm artist pages). Items from different genres should still be shown, but have a lower similarity value.
I would prefer a well-documented implementation of an algorithm with some examples.
Please don't give an answer like "use pysuggest or mahout", since those implement a plethora of algorithms and I'm looking for one that's most suitable for my data/use. I've been interested in Neo4j and how it all could be expressed as a graph of connections between users and items.
To determine similarity between users, you can run cosine or Pearson similarity (found in Mahout, and everywhere on the net really!) across the user vectors. So your data representation should look something like:
u1 [1,2,3,4,5,6]
u2 [35,24,3,4,5,6]
u3 [35,3,9,2,1,11]
On the point where you want to take multiple items into consideration: you can use the above to determine how similar two people's profiles are. The higher the correlation score, the more likely it is that they have very similar items. You can set a threshold, so that someone with 0.75 similarity or above counts as having a similar set of items in their profile.
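A minimal numpy sketch of that user-user similarity, using vectors like the ones above (Pearson is just cosine on mean-centered vectors; the 0.75 threshold is the one suggested above):

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def pearson(a, b):
    # Pearson correlation = cosine similarity of mean-centered vectors
    return cosine(a - a.mean(), b - b.mean())

u1 = np.array([1, 2, 3, 4, 5, 6], dtype=float)
u2 = np.array([35, 24, 3, 4, 5, 6], dtype=float)

print(cosine(u1, u2))
print(pearson(u1, u2))
# e.g. treat scores >= 0.75 as "similar profile" for the taste-o-meter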
Where you are missing values you can of course make up your own values. I'd just keep them binary and try to blend the various different algorithms. That's called an ensemble.
Overall, you are looking for something called item-based collaborative filtering for the recommendation aspect of your setup; it is also used to identify similar items. It's a standard recommendation algorithm that does pretty much everything you've asked for.
When trying to find similar users you can perform some type of similarity metric across your user vectors.
Regarding Python, the book Programming Collective Intelligence gives all its samples in Python, so go pick up a copy and read chapter 1.
Representing all of this as a graph will be somewhat problematic, as your underlying representation is a bipartite graph. There are lots of recommendation approaches out there that use a graph-based approach, but it's generally not the best-performing approach.
Actually, that is one of the sweet spots of a graph database like Neo4j.
So if your data model looks like this:
user -[:LIKE|:BOUGHT]-> item
You can easily get recommendations for a user with a Cypher statement like this:
start user = node:users(id="doctorkohaku")
match user -[r:LIKE]->item<-[r2:LIKE]-other-[r3:LIKE]->rec_item
where r.stars > 2 and r2.stars > 2 and r3.stars > 2
return rec_item.name, count(*) as cnt, avg(r3.stars) as rating
order by rating desc, cnt desc limit 10
This can also be done using the Neo4j Core-API or the Traversal-API.
Neo4j has a Python API that is also able to run Cypher queries.
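As a hedged sketch of running such a query from Python with the official neo4j driver on a current server, where the old START/index syntax becomes a MATCH on a property (labels, property names, and credentials below are my assumptions):

from neo4j import GraphDatabase  # pip install neo4j

query = """
MATCH (user:User {id: $user_id})-[r:LIKE]->(item)
      <-[r2:LIKE]-(other)-[r3:LIKE]->(rec_item)
WHERE r.stars > 2 AND r2.stars > 2 AND r3.stars > 2
RETURN rec_item.name AS name, count(*) AS cnt, avg(r3.stars) AS rating
ORDER BY rating DESC, cnt DESC LIMIT 10
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
with driver.session() as session:
    for record in session.run(query, user_id="doctorkohaku"):
        print(record["name"], record["cnt"], record["rating"])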
Disclaimer: I work for Neo4j
There are also some interesting articles by Marko Rodriguez about collaborative filtering.
I can suggest having a look at my open source project Reco4j. It is a graph-based recommendation engine that can be used on a graph database like yours in a very straightforward way. We support Neo4j as the graph database. It is in an early version, but very soon a more complete version will be available. In the meantime we are looking for some use cases for our project, so please contact me so that we can see how we can collaborate.
Simple item-to-item recommendation systems are well-known and frequently implemented; an example is the Slope One algorithm. This is fine if the user hasn't rated many items yet, but once they have, I want to offer more finely-grained recommendations.
Let's take a music recommendation system as an example, since they are quite popular. If a user is viewing a piece by Mozart, a suggestion for another Mozart piece or for Beethoven might be given. But if the user has made many ratings on classical music, we might be able to find correlations among the items and see that the user dislikes vocals or certain instruments.
I'm assuming this would be a two-part process: the first part is to find correlations between each user's ratings, and the second is to build the recommendation matrix from these extra data. So the question is: are there any open-source implementations or papers that can be used for each of these steps?
Taste may have something useful. It's moved to the Mahout project:
http://taste.sourceforge.net/
In general, the idea is that given a user's past preferences, you want to predict what they'll select next and recommend it. You build a machine-learning model in which the inputs are what a user has picked in the past and the attributes of each pick. The output is the item(s) they'll pick. You create training data by holding back some of their choices, and using their history to predict the data you held back.
Lots of different machine learning models you can use. Decision trees are common.
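A hedged sketch of that hold-back setup using a decision tree from scikit-learn (the library choice, feature names, and toy data are mine, not the answer's):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# One row per candidate item: features are attributes of the item plus
# summary stats of the user's past picks; label = 1 if the user picked it.
X = np.array([
    [1, 0, 0.8],  # e.g. [is_classical, has_vocals, avg_past_rating]
    [0, 1, 0.2],
    [1, 1, 0.5],
    [0, 0, 0.9],
])
y = np.array([1, 0, 1, 0])  # held-back choices serve as labels

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Score unseen candidate items; recommend the highest-probability ones.
candidates = np.array([[1, 0, 0.7], [0, 1, 0.4]])
print(model.predict_proba(candidates)[:, 1])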
One answer is that any recommender system ought to have some of the properties you describe. Initially, recommendations aren't so good and are all over the place. As it learns tastes, the recommendations will come from the area the user likes.
But, the collaborative filtering process you describe is fundamentally not trying to solve the problem you are trying to solve. It is based on user ratings, and two songs aren't rated similarly because they are similar songs -- they're rated similarly just because similar people like them.
What you really need is to define your notion of song-song similarity. Is it based on how the song sounds? The composer? It sounds like your notion is not actually based on ratings. Defining that similarity is 80% of the problem you are trying to solve.
I think the question you are really answering is, what items are most similar to a given item? Given your item similarity, that's an easier problem than recommendation.
Mahout can help with all of these things, except song-song similarity based on its audio -- or at least provide a start and framework for your solution.
There are two techniques that I can think of:
Train a feed-forward artificial neural net using backpropagation or one of its successors (e.g. resilient propagation); a minimal sketch follows after the list below.
Use version space learning. This starts with the most general and the most specific hypotheses about what the user likes and narrows them down when new examples are integrated. You can use a hierarchy of terms to describe concepts.
Common characteristics of these methods are:
You need a different function for each user. This pretty much rules out efficient database queries when searching for recommendations.
The function can be updated on the fly when the user votes for an item.
The dimensions along which you classify the input data (e.g. has vocals, beats per minute, musical scales, whatever) are very critical to the quality of the classification.
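Here is the promised minimal sketch of the first approach, one model per user, using scikit-learn's MLPClassifier (plain backpropagation with Adam rather than resilient propagation; the feature names and data are invented for illustration):

import numpy as np
from sklearn.neural_network import MLPClassifier

# Feature vector per item, e.g. [has_vocals, beats_per_minute/200, is_minor_scale]
X = np.array([
    [1, 0.6, 0],
    [0, 0.9, 1],
    [1, 0.4, 1],
    [0, 0.5, 0],
])
y = np.array([1, 0, 1, 0])  # this user's like (1) / dislike (0) votes

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

# Probability that this user likes an unseen item.
print(net.predict_proba(np.array([[1, 0.5, 0]]))[:, 1])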
Please note that these suggestions come from university courses in knowledge based systems and artificial neural nets, not from practical experience.