I'm working on an algorithm to fill a book with a number of pictures.
Each page holds either a single picture or a collage of 8, 10 or 12 pictures.
There is a constraint on the maximum pages in the book.
I need an efficient algorithm to lay out the pages so that:
All pictures are used
The number of pages is as close to the maximum as possible
I could resolve this with a little recursion but I recently read something about Dynamic Programming and figured this might be a good problem for DP.
Having absolutely zero experience with DP I did some research but I didn't find a good example/tutorial that describes a problem like this.
Could someone give me an explanation of how and where to start?
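For what it's worth, here is a minimal DP sketch under my reading of the problem (each page holds either a single picture or a collage of 8, 10 or 12 pictures, every picture must be used, and the page count should get as close to the maximum as possible without exceeding it); the function and variable names are my own:

    # Hedged sketch: page_sizes encodes the assumption that a page holds
    # either 1 picture or a collage of 8, 10 or 12 pictures.
    def best_layout(num_pictures, max_pages, page_sizes=(1, 8, 10, 12)):
        # reachable[n] = set of page counts that can consume exactly n pictures
        reachable = [set() for _ in range(num_pictures + 1)]
        reachable[0].add(0)
        for n in range(1, num_pictures + 1):
            for size in page_sizes:
                if size <= n:
                    for pages in reachable[n - size]:
                        if pages + 1 <= max_pages:
                            reachable[n].add(pages + 1)
        if not reachable[num_pictures]:
            return None                          # no layout satisfies the constraints
        best = max(reachable[num_pictures])      # page count closest to the maximum
        # Walk backwards through the table to recover one concrete layout.
        layout, n, pages = [], num_pictures, best
        while pages > 0:
            for size in page_sizes:
                if size <= n and (pages - 1) in reachable[n - size]:
                    layout.append(size)
                    n, pages = n - size, pages - 1
                    break
        return layout

    # 30 pictures, at most 6 pages -> a 5-page layout such as [1, 1, 8, 8, 12]
    print(best_layout(30, 6))

The key DP idea is that the only state you need is "how many pictures remain and how many pages are used", which is what the reachable table captures.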
This is strictly about graph algorithms (not SEO or anything). I'm interested in knowing whether there are other algorithms out there that use solely the structure of a graph (not content like keywords, etc.) to make inferences.
So for example, if you're given a large graph full of nodes, how can you make inferences assuming you have no idea what the values within the nodes actually mean? (For example, PageRank knows who links to whom (the edges) and doesn't know anything about the content itself.)
This is not exclusive to web search; I'm interested in anything that uses graph structure to make inferences.
As well as HITS [as suggested by @larsmans], there is also SALSA, which is considered more "stable" than HITS [and thus less vulnerable to manipulation by spammers].
You are also encouraged to have a look at this survey of ranking algorithms.
The main alternative to PageRank is HITS.
Another alternative to PageRank is OPIC.
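To make "structure-only" ranking concrete, here is a minimal sketch of the HITS hub/authority iteration, assuming pages are stored in a plain adjacency-list dict; this is my own illustrative version, not code from any of the references above:

    import math

    # links[p] is the list of pages that p links to; only the structure is used.
    def hits(links, iterations=50):
        pages = list(links)
        hub = {p: 1.0 for p in pages}
        auth = {p: 1.0 for p in pages}
        for _ in range(iterations):
            # authority score: sum of the hub scores of the pages linking to you
            auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
            norm = math.sqrt(sum(a * a for a in auth.values())) or 1.0
            auth = {p: a / norm for p, a in auth.items()}
            # hub score: sum of the authority scores of the pages you link to
            hub = {p: sum(auth[q] for q in links[p] if q in auth) for p in pages}
            norm = math.sqrt(sum(h * h for h in hub.values())) or 1.0
            hub = {p: h / norm for p, h in hub.items()}
        return hub, auth

    # Tiny example: C ends up the strongest authority, A the strongest hub.
    hub, auth = hits({"A": ["B", "C"], "B": ["C"], "C": []})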
I'm trying to come up with a scholarly reference for the variant of the Inverse Distance Weighting algorithm in which only the closest N points are used. The Wikipedia page for IDW lists it at the bottom under the heading Modified Shepard's Algorithm, but the information there is pretty sketchy.
This algorithm is in common use in the GIS world (see the bottom of this ArcGIS Desktop Help page for a simple description). Does anyone know of a better (preferably authoritative) reference?
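For context, here is a minimal sketch of the nearest-N variant being asked about, assuming 2D points and a power parameter of 2; the names are my own, and the references below remain the authoritative description:

    import math

    # samples: list of ((x, y), value) pairs; query: an (x, y) point.
    def idw_nearest(query, samples, n_nearest=12, power=2.0):
        qx, qy = query
        nearest = sorted(samples, key=lambda s: (s[0][0] - qx) ** 2
                                              + (s[0][1] - qy) ** 2)[:n_nearest]
        num = den = 0.0
        for (x, y), value in nearest:
            d = math.hypot(x - qx, y - qy)
            if d == 0.0:
                return value              # the query coincides with a sample point
            w = 1.0 / d ** power          # inverse distance weight
            num += w * value
            den += w
        return num / den

    samples = [((0, 0), 1.0), ((1, 0), 2.0), ((0, 1), 3.0), ((1, 1), 4.0)]
    print(idw_nearest((0.25, 0.25), samples, n_nearest=3))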
The citation in the Wikipedia page is the original paper by Shepard, and is heavily cited. It does not get more authoritative than that. Otherwise, a good book on remote sensing or GIS could be an adequate reference.
The modified Shepard version is attributed to Franke and Nielson in this paper, and is also heavily cited in the literature (you can get an idea of citation counts in Google Scholar).
I am a student interested in developing a search engine that indexes pages from my country. I have been researching algorithms to use for some time now, and I have identified HITS and PageRank as the best out there. I have decided to go with PageRank since it is more stable than the HITS algorithm (or so I have read).
I have found countless articles and academic papers related to PageRank, but my problem is that I don't understand most of the mathematical symbols that form the algorithm in these papers. Specifically, I don't understand how the Google matrix (the irreducible, stochastic matrix) is calculated.
My understanding is based on these two articles:
http://online.redwoods.cc.ca.us/instruct/darnold/LAPROJ/fall2005/levicob/LinAlgPaperFinal2-Screen.pdf
http://ilpubs.stanford.edu:8090/386/1/1999-31.pdf
Could someone provide a basic explanation (examples would be nice) with less mathematical symbols?
Thanks in advance.
The formal definition of PageRank, as given on page 4 of the cited document, is expressed in the mathematical equation with the funny "E"-looking symbol (it is in fact the capital Greek letter Sigma, the equivalent of the letter "S", which here stands for Summation).
In a nutshell, this formula says that to calculate the PageRank of page X:
    for each of the backlinks to this page (= each page that links to X),
    calculate a value that is
        the PageRank of the page that links to X [R'(v)]
        divided by
        the number of links found on that page [Nv];
    take the sum of all these values [the Sigma thing],
    add some "source of rank" [E(u)]
        (we'll get to the purpose of that later),
    and finally multiply the whole thing by a constant [c]
        (this constant is just to keep the range of PageRank manageable).
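Putting that description back into symbols (as I read the paper, with B_u the set of pages that link to u and N_v the number of links found on page v), the formula is:

    R'(u) = c \sum_{v \in B_u} \frac{R'(v)}{N_v} + c \, E(u)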
The key idea behind this formula is that all the web pages linking to a given page X are adding value to its "worth". By linking to some page they are "voting" in favor of this page. However, this "vote" has more or less weight, depending on two factors:
The popularity of the page that links to X [R'(v)]
Whether the page that links to X also links to many other pages or not [Nv]
These two factors reflect very intuitive ideas:
It's generally better to get a letter of recommendation from a recognized expert in the field than from an unknown person.
Regardless of who gives the recommendation, by also giving recommendations to other people, they diminish the value of their recommendation to you.
As you noticed, this formula involves a somewhat circular reference, because to know the PageRank of X, you need to know the PageRank of all the pages linking to X. So how do you figure out these PageRank values? That's where the convergence issue, explained in the next section of the document, kicks in.
Essentially, by starting with some "random" (or preferably "decent guess") PageRank values for all pages and recalculating PageRank with the formula above, the newly calculated values get "better" as you iterate the process. The values converge, i.e. they each get closer and closer to the actual/theoretical value. Therefore, by iterating a sufficient number of times, we reach a point where additional iterations would not add any practical precision to the values produced by the last iteration.
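As a concrete illustration of that iterative process, here is a minimal power-iteration sketch in Python; the adjacency-list representation, the uniform starting guess, the damping factor of 0.85, and the uniform E(u) = (1 - d)/n are assumptions of mine, not something prescribed by the paper:

    # links[page] lists the pages it links to; every link target is assumed
    # to also appear as a key in links.
    def pagerank(links, d=0.85, tol=1e-8, max_iter=100):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}                 # uniform starting guess
        for _ in range(max_iter):
            new_rank = {p: (1.0 - d) / n for p in pages}   # the "source of rank" term
            for p in pages:
                out = links[p]
                if not out:                                # dangling page: spread its rank evenly
                    for q in pages:
                        new_rank[q] += d * rank[p] / n
                else:
                    share = d * rank[p] / len(out)         # R'(v) / Nv, damped by d
                    for q in out:
                        new_rank[q] += share
            delta = sum(abs(new_rank[p] - rank[p]) for p in pages)
            rank = new_rank
            if delta < tol:                                # the values have converged
                break
        return rank

    # Tiny example: A and B link to each other, C links only to A.
    print(pagerank({"A": ["B"], "B": ["A"], "C": ["A"]}))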
Now... That is nice and dandy, in theory. The trick is to convert this algorithm into something equivalent that can be computed more quickly. There are several papers that describe how this, and similar tasks, can be done. I don't have such references off-hand, but will add them later. Beware, they will involve a healthy dose of linear algebra.
EDIT: as promised, here are a few links regarding algorithms to calculate PageRank.
Efficient Computation of PageRank (Haveliwala, 1999)
Exploiting the Block Structure of the Web for Computing PageRank (Kamvar et al., 2003)
A fast two-stage algorithm for computing PageRank (Lee et al., 2002)
Although many of the authors of the links provided above are from Stanford, it doesn't take long to realize that the quest for efficient PageRank-like calculation is a hot field of research. I realize this material goes beyond the scope of the OP, but it is important to hint at the fact that the basic algorithm isn't practical for big webs.
To finish with a very accessible text (yet with many links to in-depth info), I'd like to mention Wikipedia's excellent article on PageRank.
If you're serious about this kind of thing, you may consider an introductory/refresher class in maths, particularly linear algebra, as well as a computer science class that deals with graphs in general. BTW, great suggestion from Michael Dorfman, in this post, for OCW's videos of the 18.06 lectures.
I hope this helps a bit...
If you are serious about developing an algorithm for a search engine, I'd seriously recommend you take a Linear Algebra course. In the absence of an in-person course, the MIT OCW course by Gilbert Strang is quite good (video lectures at http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/).
A class like this would certainly allow you to understand the mathematical symbols in the document you provided; there's nothing in that paper that wouldn't be covered in a first-year Linear Algebra course.
I know this isn't the answer you are looking for, but it's really the best option for you. Having someone try to explain the individual symbols or algorithms to you when you don't have a good grasp of the basic concepts isn't a very good use of anybody's time.
This is the paper that you need: http://infolab.stanford.edu/~backrub/google.html (If you do not recognise the names of the authors, you will find more information about them here: http://www.google.com/corporate/execs.html).
The symbols used in the document are described, in lay English, in the document itself.
Thanks for making me google this.
You might also want to read the introductory tutorial on the mathematics behind the construction of the PageRank matrix written by David Austin, entitled How Google Finds Your Needle in the Web's Haystack; it starts with a simple example and builds up to the full definition.
"The $25,000,000,000 Eigenvector: The Linear Algebra Behind Google" from Rose-Hulman is a bit out of date, because by now PageRank is a $491B linear algebra problem, but I think the paper is very well written.
"Programming Collective Intelligence" has a nice discussion of PageRank as well.
Duffymo posted the best reference in my opinion. I studied the PageRank algorithm in my senior undergrad year. PageRank does the following:
Define the set of current webpages as the states of a finite Markov chain.
Define the probability of transitioning from site u to site v, where u has an outgoing link to v, to be 1/N_u, where N_u is the number of outgoing links from u.
Assume the Markov chain defined above is irreducible (this can be enforced with only a slight degradation of the results).
It can be shown that every finite irreducible Markov chain has a stationary distribution. Define the PageRank to be that stationary distribution, that is to say the vector that holds the probability of a random particle ending up at each given site as the number of state transitions goes to infinity.
Google uses a slight variation on the power method to find the stationary distribution (the power method finds the dominant eigenvector). Other than that, there is nothing to it. It's rather simple and elegant, probably one of the simplest applications of Markov chains I can think of, but it is worth a lot of money!
So all the PageRank algorithm does is take the topology of the web into account as an indication of whether a website should be important. The more incoming links a site has, the greater the probability of a random particle spending its time at that site over an infinite amount of time.
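For the original question about the Google matrix: in the standard textbook formulation (as I understand it, not taken verbatim from the cited papers), the irreducible, stochastic matrix G is built from the raw link structure roughly as

    G = d \, S + (1 - d) \, \frac{1}{n} \mathbf{1}\mathbf{1}^{T},
    \qquad \pi G = \pi, \qquad \sum_i \pi_i = 1,

where S is the row-stochastic link matrix (row u has 1/N_u in the columns u links to, and dangling rows are replaced by the uniform row 1/n), d is the damping factor, and n is the number of pages. PageRank is the stationary distribution pi of G, which is exactly what the power method computes.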
If you want to learn more about PageRank with less math, then this is a very good tutorial on basic matrix operations. I recommend it to everyone who has little math background but wants to dive into ranking algorithms.
Is the Google PageRank calculated as one value for a whole website (domain) or is it computed for every single webpage?
How closely Google follows the publicly known PageRank algorithm is their trade secret. In the generic algorithm, PageRank is calculated per document.
Edit: the original, generic PageRank is explained at http://www.ams.org/featurecolumn/archive/pagerank.html
Google PageRank
Here is a snippet and the link to an explanation from Google themselves:
http://www.google.com/corporate/tech.html
PageRank Technology: PageRank reflects our view of the importance of web pages by considering more than 500 million variables and 2 billion terms. Pages that we believe are important pages receive a higher PageRank and are more likely to appear at the top of the search results.
PageRank also considers the importance of each page that casts a vote, as votes from some pages are considered to have greater value, thus giving the linked page greater value. We have always taken a pragmatic approach to help improve search quality and create useful products, and our technology uses the collective intelligence of the web to determine a page's importance.
per webpage.
It should be per web page.