GBrank: what is the final model? - algorithm

A colleague and I cannot reach a consensus on what the GBrank model (after training) should look like.
Introduction
The method starts by performing the typical conversion from a pairwise to a pointwise dataset, where the target variable z now represents a score that should satisfy zi > zj whenever i is preferred over j. The authors then suggest using Gradient Boosting Trees and "punishing" cases where that model predicts zj > zi even though i is preferred over j. The "punishment" is performed by swapping the scores and also incrementing or decrementing them by τ.
Disagreement
Where we disagree is whether Gradient Boosting Rank is itself an ensemble. That is, is the model we are training gk or hk?
Reference Material
The original article: http://www.cc.gatech.edu/~zha/papers/fp086-zheng.pdf

h is used throughout the paper to denote the hypothesis you are working with; g is just a domain-specific regression model used to construct h, thus GBRank is hk. In particular, it is a boosting method, so it has to be an ensemble, trained by building a strong learner from a set of weak learners (following the definition of boosting posed by Kearns and Valiant in the late '80s). h is an ensemble (due to its recursive definition); g is not (it is just a regressor trained on some transformed dataset).
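To make the recursion explicit, here is a rough pseudo-Python sketch of my reading of the paper's update rule (not the authors' code). fit_regressor stands in for any regression learner (the paper uses gradient boosting trees), tau is the margin, and eta is the shrinkage applied when averaging in the new regressor:

def gbrank(pairs, X, n_rounds, tau, eta, fit_regressor):
    h = lambda x: 0.0                        # h_0: initial hypothesis
    for k in range(1, n_rounds + 1):
        # Collect mis-ordered pairs: i is preferred over j but is not
        # scored at least tau higher by the current hypothesis
        targets = []
        for i, j in pairs:                   # (i, j) means i is preferred over j
            if h(X[i]) < h(X[j]) + tau:
                targets.append((X[i], h(X[j]) + tau))   # push i's score up
                targets.append((X[j], h(X[i]) - tau))   # push j's score down
        g = fit_regressor(targets)           # g_k: a plain regressor, not an ensemble
        h_prev = h
        # h_k is a weighted average of h_{k-1} and g_k: the actual ensemble
        h = lambda x, hp=h_prev, gk=g, kk=k: (kk * hp(x) + eta * gk(x)) / (kk + 1)
    return h                                 # the trained model is h_k, not g_k

Note that the returned object is h, which recursively contains every g fitted so far; each g alone is just one member of that ensemble.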

Related

How is the class center for a decision attribute calculated in class center based fuzzification algorithm?

I came across the class center based fuzzification algorithm on page 16 of this research paper on TRFDT. However, I fail to understand what is happening in step 2 of this algorithm (titled in the paper as Algorithm 2: Fuzzification). If someone could explain it with a small example, it would certainly be helpful.
It is not clear from your question which parts of the article you understand, and IMHO the article is not written in the clearest possible way, so this is going to be a long answer.
Let's start with some intuition behind this article. In short I'd say it is: "let's add fuzziness everywhere to decision trees".
How does a decision tree work? We have a classification problem, and we say that instead of analyzing all attributes of a data point in a holistic way, we'll analyze them one by one in an order defined by the tree and navigate the tree until we reach some leaf node. The label at that leaf node is our prediction. So the trick is how to build a good tree, i.e. a good order of attributes and good splitting points. This is a well-studied problem, and the idea is to build a tree that encodes as much information as possible by some metric. There are several metrics, and this article uses entropy, which is similar to the widely used information gain.
The next idea is that we can treat the classification (i.e. the split of the values into classes) as fuzzy rather than exact (aka "crisp"). The idea here is that in many real-life situations not all members of a class are equally representative: some are more "core" examples and some are more "edge" examples. If we can capture this difference, we can provide a better classification.
And finally there is a question of how similar the data points are (generally or by some subset of attributes) and here we can also have a fuzzy answer (see formulas 6-8).
So the idea of the main algorithm (Algorithm 1) is the same as in the ID3 tree: recursively find the attribute a* that classifies the data in the best way, and perform the best split along that attribute. The main difference is in how the information gain for the best-attribute selection is measured (see the heuristic in formulas 20-24), and in that, because of fuzziness, the usual stop rule of "only one class left" doesn't work anymore, so another entropy (Kosko fuzzy entropy in 25) is used to decide whether it is time to stop.
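As a rough pseudo-Python skeleton of Algorithm 1 (heavily simplified, my own paraphrase of the above; select_best_attribute stands for the heuristic in formulas 20-24 and kosko_entropy for formula 25):

def build_tree(points, attributes, theta):
    # Stop when the Kosko fuzzy entropy (formula 25) is low enough,
    # since "only one class left" no longer works with fuzzy labels
    if kosko_entropy(points) < theta or not attributes:
        return Leaf(fuzzy_class_distribution(points))
    # Pick the attribute scoring best on the heuristic (formulas 20-24)
    a_star = select_best_attribute(points, attributes)
    children = {value: build_tree(subset, attributes - {a_star}, theta)
                for value, subset in split(points, a_star)}
    return Node(a_star, children)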
Given this skeleton of Algorithm 1, there are quite a few parts that you can (or should) select:
How do you measure μ(ai)τ(Cj)(x) used in (20)? This is a measure of how well x represents the class Cj with respect to attribute ai; note that here being not in Cj and far from the points in Cj is also good. There are two obvious choices: the lower bounds (16 and 18) and the upper bounds (17 and 19).
How do you measure μRτ(x, y) used in (16-19)? Given that R is induced by ai, this becomes μ(ai)τ(x, y), which is a measure of similarity between two points with respect to attribute ai. Here you can choose one of the metrics (6-8).
How do you measure μCi(y) used in (16-19)? This is the measure of how well the point y fits in the class Ci. If your data already comes as a fuzzy classification, there is nothing you need to do here. But if your input classification is crisp, then you should somehow produce μCi(y) from it, and this is what Algorithm 2 does.
There is a trivial solution of μCj(xi) = "1 if xi ∈ Cj and 0 otherwise", but this is not fuzzy at all. The process of building fuzzy data is called "fuzzification". The idea behind Algorithm 2 is that we assume every class Cj is actually some kind of cluster in the space of attributes, so we can measure the degree of membership μCj(xi) from the distance of xi to the center of the cluster cj (the closer we are, the higher the membership should be, so it is really some inverse of a distance). Note that since the distance is measured over attributes, you should normalize your attributes somehow, or one of them might dominate the distance. And this is exactly what Algorithm 2 does:
it estimates the center of the cluster for class Cj as the center of mass of all the known points in that class, i.e. just the average of all points by each coordinate (attribute);
it calculates the distances from each point xi to each estimated class center cj;
looking at the formula in step #12, it uses the inverse square of the distance as a measure of proximity, and just normalizes the value, because for fuzzy sets Sum[over all Cj](μCj(xi)) should be 1.
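For concreteness, here is a minimal sketch of those three steps in Python (my own illustration, not the paper's code); it assumes the attributes in X are already normalized:

import numpy as np

def fuzzify(X, labels, classes):
    # X       : (n_points, n_attributes) array, attributes already normalized
    # labels  : array of crisp class labels, one per point
    # classes : list of class labels
    # Returns an (n_points, n_classes) matrix mu[i, j] = mu_Cj(x_i).

    # Step 1: class centers = per-class mean of the points (center of mass)
    centers = np.array([X[labels == c].mean(axis=0) for c in classes])

    # Step 2: distances from every point to every class center
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)

    # Step 3: inverse-square proximity, normalized so memberships sum to 1
    prox = 1.0 / np.maximum(d, 1e-12) ** 2   # guard against zero distance
    return prox / prox.sum(axis=1, keepdims=True)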

What is the exact difference between a model and an algorithm?

What is the exact difference between a model and an algorithm?
Let us take logistic regression as an example. Is logistic regression a model or an algorithm, and why?
An algorithm is the general approach you will take. The model is what you get when you run the algorithm over your training data and what you use to make predictions on new data.
You can generate a new model with the same algorithm but with different data, or you can get a new model from the same data but with a different algorithm.
Do you like Ferrari? They have a very nice 812 Superfast model, but they also have other models. Every model is different and leads to a different behavior and experience.
Think of a model more like a mathematical description of a system, an equation that gives you a general way to achieve your vision or an idea. For example,
y = mx + b
is a model function that yields a straight line (see least squares linear regression).
Whereas an algorithm is a set of actions (or rules) that you need to perform in order to implement your vision. For example, the famous minimax algorithm often used in AI game players that have to choose the next move.
To finish my idea from above: imagine that a Ferrari model is an already existing idea on paper, and an algorithm is a robot in a factory that performs its set of programmed actions, a sequence of actions. This is naively speaking, of course, but hopefully you get the idea.
An algorithm is the general mathematical recipe; take linear regression for example. Linear regression (with one variable) defines a line in 2-D space, but the slope and position of the line cannot be determined unless some sample values are available to solve the equation.
This regression line can be represented mathematically as y = mx + a.
Once sample values (or training data) are applied to solve this equation, the line can be drawn in 2-D space.
This line now becomes the model with known slope (m) and intercept (a). Using this model, the value of y (label) can be determined for a given value of x (feature).
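As a small illustration of this answer in code (my own example, using numpy's least squares fit): the fitting procedure is the algorithm, and the resulting pair (m, a) is the model.

import numpy as np

# Training data: x (feature) and y (label)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])

# The algorithm: ordinary least squares fitting of y = m*x + a
m, a = np.polyfit(x, y, deg=1)

# The model: the fitted parameters, now usable for prediction
def model(x_new):
    return m * x_new + a

print(model(5.0))  # predict the label for a new feature value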

Pseudo-code for Network-only-bayes-classifier

I am trying to implement a classification toolkit for univariate network data using igraph and python.
However, my question is actually more of an algorithms question in the relational classification area than a programming one.
I am following Classification in Networked Data paper.
I am having difficulty understanding what this paper refers to as the "Network-Only Bayes Classifier" (NBC), which is one of the relational classifiers explained in the paper.
I implemented a Naive Bayes classifier for text data using a bag-of-words feature representation earlier, and the idea of Naive Bayes on text data is clear in my mind.
I think this method (NBC) is a simple translation of the same idea to the relational classification area. However, I am confused by the notation used in the equations, so I couldn't figure out what is going on. I also have a question on the notation used in the paper here.
NBC is explained on page 14 of the paper.
Summary:
I need the pseudo-code of the "Network-Only Bayes Classifier"(NBC) explained in the paper, page 14.
Pseudo-code notation:
Let's call vs the list of vertices in the graph. len(vs) is its length, and vs[i] is the ith vertex.
Let's assume we have a univariate and binary scenario, i.e., vs[i].class is either 0 or 1 and there is no other given feature of a node.
Let's assume we ran a local classifier beforehand, so that every node has an initial label calculated by the local classifier. I am only interested in the relational classifier part.
Let's call v the vertex we are trying to predict, and v.neighbors() is the list of vertices which are neighbors of v.
Let's assume all the edge weights are 1.
Now, I need the pseudo-code for:
def NBC(vs, v):
    # v.class is 0 or 1
    # v.neighbors is the list of neighbor vertices
    # vs is the list of all vertices
    # This function returns 0 or 1
Edit:
To make your job easier, I made this example. I need the answer for the last 2 equations.
In words...
The probability that node x_i belongs to the class c is equal to:
the probability of the neighbourhood of x_i (called N_i) if x_i indeed belonged to the class c; multiplied by...
the probability of the class c itself; divided by...
the probability of the neighbourhood N_i (of node x_i) itself.
As far as the probability of the neighbourhood N_i (of x_i) if x_i were to belong to the class c is concerned, it is equal to a product. Which probability is inside the product? The probability that some node (v_j) of the neighbourhood (N_i) belongs to the class c if x_i indeed belonged to the class c, raised to the weight of the edge connecting the node being examined and the node being classified (but you are not interested in this... yet). (The notation is a bit off here, I think: why do they define v_j and then never use it? ... Whatever.)
Finally, multiply that product by some 1/Z. Why? Because all the ps are probabilities and therefore lie within the range of 0 to 1, but the weights w could be anything, meaning that in the end the calculated probability could be out of range.
The probability that some x_i belongs to a class c GIVEN THE EVIDENCE FROM ITS NEIGHBOURHOOD is a posterior probability. (AFTER something... What is this something? Please see below.)
The probability of the appearance of neighbourhood N_i if x_i belonged to the class c is the likelihood.
The probability of the class c itself is the prior probability. (BEFORE something... What is this something? The evidence.) The prior tells you the probability of the class without any evidence presented, but the posterior tells you the probability of a specific event (that x_i belongs to c) GIVEN THE EVIDENCE FROM ITS NEIGHBOURHOOD.
The prior can be subjective. That is, it can be derived from limited observations or be an informed opinion. In other words, it doesn't have to be a population distribution; it only has to be accurate enough, not absolutely known.
The likelihood is a bit more challenging. Although we have a formula for it here, the likelihood must be estimated from a large enough population, or from as much "physical" knowledge about the phenomenon being observed as possible.
Within the product (the capital Pi in the second equation, which expresses the likelihood) you have a conditional. The conditional is the probability that a neighbourhood node belongs to some class if x belonged to class c.
In the typical application of the Naive Bayes classifier, namely document classification (e.g. spam mail), the conditional that an email is spam GIVEN THE APPEARANCE OF SPECIFIC WORDS IN ITS BODY is derived from a huge database of observations, i.e. a huge database of emails for which we really, absolutely know the class. In other words, I must have an idea of what a spam email looks like, and eventually the majority of spam emails converge to some common theme ("I am some bank official and I have a money opportunity for you; give me your bank details so I can wire money to you and make you rich...").
Without this knowledge, we can't use Bayes rule.
So, to get back to your specific problem. In your PDF, you have a question mark in the derivation of the product.
Exactly.
So the real question here is: What is the likelihood from your Graph / data?
(...or where are you going to derive it from? Obviously, either from a large number of known observations or from some knowledge about the phenomenon. For example, what is the likelihood that a node is infected, given that a proportion of its neighbourhood is infected too?)
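Since the question explicitly asks for pseudo-code, here is a hedged sketch of my reading of the NBC equations in the question's notation (binary classes, unit edge weights). Because class is a reserved word in Python, the sketch assumes a klass attribute and an igraph-style neighbors() method; this is my own illustration, not reference code from the paper:

from collections import defaultdict

def edge_conditionals(vs):
    # Estimate P(neighbour's class = k | node's class = c) from the labelled
    # graph, with Laplace smoothing so no probability is exactly zero.
    counts = defaultdict(lambda: {0: 0, 1: 0})
    for u in vs:
        for n in u.neighbors():
            counts[u.klass][n.klass] += 1
    cond = {}
    for c in (0, 1):
        total = counts[c][0] + counts[c][1] + 2   # +2: Laplace over two classes
        cond[c] = {k: (counts[c][k] + 1) / total for k in (0, 1)}
    return cond

def NBC(vs, v):
    cond = edge_conditionals(vs)
    n1 = sum(u.klass for u in vs)
    prior = {0: (len(vs) - n1) / len(vs), 1: n1 / len(vs)}   # P(c)
    score = {}
    for c in (0, 1):
        p = prior[c]
        for n in v.neighbors():      # product over the neighbourhood N_i
            p *= cond[c][n.klass]    # edge weights assumed to be 1
        score[c] = p
    Z = score[0] + score[1]          # the 1/Z normalisation from the paper
    return 0 if score[0] / Z >= score[1] / Z else 1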
I hope this helps.

Machine Learning Algorithm for Completing Sparse Matrix Data

I've seen some machine learning questions on here so I figured I would post a related question:
Suppose I have a dataset where athletes participate in running competitions of 10 km and 20 km with hilly courses, i.e. every competition has its own difficulty.
The finishing times from users are almost inverse normally distributed for every competition.
One can write this problem as a matrix:
        Comp1  Comp2  Comp3
User1   20min    ??   10min
User2   25min  20min  12min
User3   30min  25min    ??
User4   30min    ??     ??
I would like to complete the matrix above, which has size 1000x20 and a sparseness of 8% (!).
There should be a very easy way to complete this matrix, since I can calculate parameters for every user (ability) and parameters for every competition (the mu and lambda of the distributions). Moreover, the correlations between the competitions are very high.
I can take advantage of the rankings User1 < User2 < User3 and Item3 << Item2 < Item1.
Could you maybe give me a hint which methods I could use?
Your astute observation that this is a matrix completion problem gets you most of the way to the solution. I'll codify your intuition that the combination of a user's ability and a course's difficulty yields the time of a race, then present various algorithms.
Model
Let the vector u denote the speed of the users, so that u_i is user i's speed. Let the vector v denote the difficulty of the courses, so that v_j is course j's difficulty. Also, when available, let t_ij be user i's time on course j, and define y_ij = 1/t_ij, user i's speed on course j.
Since you say the times are inverse Gaussian distributed, a sensible model for the observations is
y_ij = u_i * v_j + e_ij,
where e_ij is a zero-mean Gaussian random variable.
To fit this model, we search for vectors u and v that minimize the prediction error over the observed speeds:
f(u,v) = sum_ij (u_i * v_j - y_ij)^2
Algorithm 1: missing value Singular Value Decomposition
This is the classical Hebbian algorithm. It minimizes the above cost function by gradient descent. The gradients of f w.r.t. u and v are
df/du_i = sum_j (u_i * v_j - y_ij) v_j
df/dv_j = sum_i (u_i * v_j - y_ij) u_i
Plug these gradients into a conjugate gradient solver or a BFGS optimizer, like MATLAB's fminunc or scipy's optimize.fmin_ncg or optimize.fmin_bfgs. Don't roll your own gradient descent unless you're willing to implement a very good line search algorithm.
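A minimal sketch of this fit in Python with scipy (my own illustration of the rank-1 model above; the factor of 2 in the gradients just comes from differentiating the squared error):

import numpy as np
from scipy.optimize import minimize

def fit_speed_model(Y, mask, n_users, n_courses, seed=0):
    # Y    : (n_users, n_courses) array of observed speeds (1/time), zeros where missing
    # mask : boolean array, True where y_ij is observed
    rng = np.random.default_rng(seed)
    x0 = rng.standard_normal(n_users + n_courses) * 0.1

    def unpack(x):
        return x[:n_users], x[n_users:]

    def f(x):
        u, v = unpack(x)
        R = (np.outer(u, v) - Y) * mask   # residuals on observed entries only
        return np.sum(R ** 2)

    def grad(x):
        u, v = unpack(x)
        R = (np.outer(u, v) - Y) * mask
        # df/du_i = 2 sum_j R_ij v_j ; df/dv_j = 2 sum_i R_ij u_i
        return np.concatenate([2 * R @ v, 2 * R.T @ u])

    res = minimize(f, x0, jac=grad, method='L-BFGS-B')
    return unpack(res.x)

The completed matrix is then the outer product of the returned u and v.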
Algorithm 2: matrix factorization with a trace norm penalty
Recently, simple convex relaxations to this problem have been proposed. The resulting algorithms are just as simple to code up and seem to work very well. Check out, for example, Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm. These methods minimize
f(m) = sum_ij (m_ij - y_ij)^2 + ||m||_*,
where ||.||_* is the so-called nuclear norm of the matrix m. Implementations will again end up computing gradients with respect to u and v and relying on a nonlinear optimizer.
There are several ways to do this; perhaps the best architecture to try first is the following:
(As usual, as a preprocessing step, normalize your data to zero mean and unit standard deviation as best you can. You can do this by fitting a function to the distribution of all race results, applying its inverse, and then subtracting the mean and dividing by the standard deviation.)
Select a hyperparameter N (you can tune this as usual with a cross validation set).
For each participant and each race create an N-dimensional feature vector, initially random. So if there are R races and P participants then there are R+P feature vectors with a total of N(R+P) parameters.
The prediction for a given participant and a given race is a function of the two corresponding feature vectors (as a first try use the scalar product of these two vectors).
Alternate between incrementally improving the participant feature vectors and the race feature vectors.
To improve a feature vector use gradient descent (or some more complex optimization method) on the known data elements (the participant/race pairs for which you have a result).
That is, your loss function is:
total_error = 0
for all i, j:
    if Participant i participated in Race j:
        actual = ActualRaceResult(i, j)
        predicted = ScalarProduct(ParticipantFeatures_i, RaceFeatures_j)
        total_error += (actual - predicted)^2
So calculate the partial derivative of this function wrt the feature vectors and adjust them incrementally as per a usual ML algorithm.
(You should also include a regularization term in the loss function, for example the squared lengths of the feature vectors.)
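As a hedged sketch of one such incremental update (names follow the pseudocode above; the learning rate lr and regularization strength reg are assumed hyperparameters to tune):

import numpy as np

def sgd_step(P, R, i, j, actual, lr=0.01, reg=0.1):
    # One stochastic gradient step for participant i and race j.
    # P : (n_participants, N) feature matrix, R : (n_races, N) feature matrix.
    err = actual - P[i] @ R[j]                 # prediction error for this pair
    grad_p = -2 * err * R[j] + 2 * reg * P[i]  # d(loss)/d(participant features)
    grad_r = -2 * err * P[i] + 2 * reg * R[j]  # d(loss)/d(race features)
    P[i] -= lr * grad_p
    R[j] -= lr * grad_r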
Let me know if this architecture is clear to you or you need further elaboration.
I think this is a classical task of missing data recovery. There exist several different methods; one I can suggest is based on the Self-Organizing Feature Map (Kohonen's map).
Below it is assumed that every athlete record is a pattern, and every competition result is a feature.
Basically, you should divide your data into 2 sets: the first with fully defined patterns, and the second with patterns that have partially missing features. I assume this is feasible because the sparseness is 8%, so you have enough data (92%) to train the net on undamaged records.
Then you feed the first set to the SOM and train it on this data. During this process all features are used. I won't copy the algorithm here, because it can be found in many public sources, and even some implementations are available.
After the net is trained, you can feed patterns from the second set to the net. For each pattern the net should calculate the best matching unit (BMU), based only on those features that exist in the current pattern. Then you can take from the BMU its weights corresponding to the missing features.
As an alternative, you could skip dividing the data into 2 sets and train the net on all patterns, including the ones with missing features. But for such patterns the learning process should be altered in a similar way, i.e. the BMU should be calculated only on the features existing in each pattern.
I think you can have a look at recent low-rank matrix completion methods.
The assumption is that your matrix has low rank compared to the matrix dimension.
min rank(M)
s.t. ||P(M-M')||_F = 0
M is the final result, and M' is the incomplete matrix you currently have.
This formulation minimizes the rank of your matrix M. P in the constraint is an operator that takes the known entries of your matrix M' and constrains those entries in M to be the same as in M'.
The optimization of this problem has a relaxed version, which is:
min ||M||_* + \lambda*||P(M-M')||_F
rank(M) is relaxed to its convex envelope ||M||_*. Then you trade off the two terms by controlling the parameter \lambda.
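For a concrete instance of this family, here is a minimal sketch in the spirit of Soft-Impute (Mazumder et al.), which alternates between filling in the missing entries and soft-thresholding the singular values; lam plays the role of the trade-off parameter lambda above:

import numpy as np

def soft_impute(M_obs, mask, lam=1.0, n_iters=100):
    # M_obs : matrix with observed entries (zeros elsewhere)
    # mask  : boolean array, True where an entry is observed
    M = np.zeros_like(M_obs, dtype=float)
    for _ in range(n_iters):
        # Keep observed entries fixed, fill missing ones from the current estimate
        filled = np.where(mask, M_obs, M)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0)          # soft-threshold the singular values
        M = (U * s) @ Vt                    # low-rank reconstruction
    return M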

What is the difference between a generative and a discriminative algorithm? [closed]

What is the difference between a generative and a discriminative algorithm?
Let's say you have input data x and you want to classify the data into labels y. A generative model learns the joint probability distribution p(x,y) and a discriminative model learns the conditional probability distribution p(y|x) - which you should read as "the probability of y given x".
Here's a really simple example. Suppose you have the following data in the form (x,y):
(1,0), (1,0), (2,0), (2,1)
p(x,y) is
        y=0   y=1
       -----------
x=1 |  1/2    0
x=2 |  1/4   1/4
p(y|x) is
        y=0   y=1
       -----------
x=1 |   1     0
x=2 |  1/2   1/2
If you take a few minutes to stare at those two matrices, you will understand the difference between the two probability distributions.
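If it helps, here is a tiny snippet (my own illustration, not from the original answer) that computes both tables from the four data points:

from collections import Counter

data = [(1, 0), (1, 0), (2, 0), (2, 1)]
n = len(data)

joint = Counter(data)                                   # counts of (x, y) pairs
p_xy = {xy: c / n for xy, c in joint.items()}           # p(x, y)

x_counts = Counter(x for x, _ in data)                  # marginal counts of x
p_y_given_x = {xy: c / x_counts[xy[0]] for xy, c in joint.items()}  # p(y | x)

print(p_xy)         # {(1, 0): 0.5, (2, 0): 0.25, (2, 1): 0.25}
print(p_y_given_x)  # {(1, 0): 1.0, (2, 0): 0.5, (2, 1): 0.5}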
The distribution p(y|x) is the natural distribution for classifying a given example x into a class y, which is why algorithms that model this directly are called discriminative algorithms. Generative algorithms model p(x,y), which can be transformed into p(y|x) by applying Bayes rule and then used for classification. However, the distribution p(x,y) can also be used for other purposes. For example, you could use p(x,y) to generate likely (x,y) pairs.
From the description above, you might be thinking that generative models are more generally useful and therefore better, but it's not as simple as that. This paper (Ng & Jordan's "On Discriminative vs. Generative classifiers") is a very popular reference on the subject, but it's pretty heavy going. The overall gist is that discriminative models generally outperform generative models in classification tasks.
A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal?
A discriminative algorithm does not care about how the data was generated, it simply categorizes a given signal.
Imagine your task is to classify speech into a language.
You can do it by either:
learning each language, and then classifying it using the knowledge you just gained
or
determining the difference in the linguistic models without learning the languages, and then classifying the speech.
The first one is the generative approach and the second one is the discriminative approach.
Check this reference for more details: http://www.cedar.buffalo.edu/~srihari/CSE574/Discriminative-Generative.pdf.
In practice, the models are used as follows.
In discriminative models, to predict the label y from the training example x, you must evaluate:
f(x) = argmax_y p(y|x)
which merely chooses the most likely class y considering x. It's like we were trying to model the decision boundary between the classes. This behavior is very clear in neural networks, where the computed weights can be seen as a complexly shaped curve isolating the elements of a class in the space.
Now, using Bayes' rule, let's replace the p(y|x) in the equation by p(x|y) p(y) / p(x). Since you are just interested in the arg max, you can wipe out the denominator, which will be the same for every y. So you are left with
f(x) = argmax_y p(x|y) p(y)
which is the equation you use in generative models.
While in the first case you had the conditional probability distribution p(y|x), which modeled the boundary between classes, in the second you had the joint probability distribution p(x, y), since p(x | y) p(y) = p(x, y), which explicitly models the actual distribution of each class.
With the joint probability distribution function, given a y, you can calculate ("generate") its respective x. For this reason, they are called "generative" models.
Here's the most important part from the lecture notes of CS229 (by Andrew Ng) related to the topic; it really helped me understand the difference between discriminative and generative learning algorithms.
Suppose we have two classes of animals, elephant (y = 1) and dog (y = 0). And x is the feature vector of the animals.
Given a training set, an algorithm like logistic regression or the perceptron algorithm (basically) tries to find a straight line (that is, a decision boundary) that separates the elephants and dogs. Then, to classify a new animal as either an elephant or a dog, it checks on which side of the decision boundary it falls, and makes its prediction accordingly. We call these discriminative learning algorithms.
Here's a different approach. First, looking at elephants, we can build a model of what elephants look like. Then, looking at dogs, we can build a separate model of what dogs look like. Finally, to classify a new animal, we can match the new animal against the elephant model, and match it against the dog model, to see whether the new animal looks more like the elephants or more like the dogs we had seen in the training set. We call these generative learning algorithms.
The different models are summed up in the table below:
Image source: Supervised Learning cheatsheet - Stanford CS 229 (Machine Learning)
Generally, there is a practice in the machine learning community not to learn something that you don't need to. For example, consider a classification problem where the goal is to assign y labels to a given x input. If we use a generative model
p(x,y) = p(y|x) p(x)
we have to model p(x), which is irrelevant for the task at hand. Practical limitations like data sparseness will force us to model p(x) with some weak independence assumptions. Therefore, we intuitively use discriminative models for classification.
The short answer
Many of the answers here rely on the widely-used mathematical definition [1]:
Discriminative models directly learn the conditional predictive distribution p(y|x).
Generative models learn the joint distribution p(x,y) (or rather, p(x|y) and p(y)).
Predictive distribution p(y|x) can be obtained with Bayes' rule.
Although very useful, this narrow definition assumes the supervised setting and is less handy when examining unsupervised or semi-supervised methods. It also doesn't apply to many contemporary approaches for deep generative modeling. For example, we now have implicit generative models, e.g. Generative Adversarial Networks (GANs), which are sampling-based and don't even explicitly model the probability density p(x) (instead learning a divergence measure via the discriminator network). But we call them "generative models" since they are used to generate (high-dimensional [10]) samples.
A broader and more fundamental definition [2] seems equally fitting for this general question:
Discriminative models learn the boundary between classes.
So they can discriminate between different kinds of data instances.
Generative models learn the distribution of data.
So they can generate new data instances.
A closer look
Even so, this question implies somewhat of a false dichotomy [3]. The generative-discriminative "dichotomy" is in fact a spectrum which you can even smoothly interpolate between [4].
As a consequence, this distinction gets arbitrary and confusing, especially when many popular models do not neatly fall into one or the other [5,6], or are in fact hybrid models (combinations of classically "discriminative" and "generative" models).
Nevertheless it's still a highly useful and common distinction to make. We can list some clear-cut examples of generative and discriminative models, both canonical and recent:
Generative: Naive Bayes, latent Dirichlet allocation (LDA), Generative Adversarial Networks (GAN), Variational Autoencoders (VAE), normalizing flows.
Discriminative: Support vector machine (SVM), logistic regression, most deep neural networks.
There is also a lot of interesting work deeply examining the generative-discriminative divide [7] and spectrum [4,8], and even transforming discriminative models into generative models [9].
In the end, definitions are constantly evolving, especially in this rapidly growing field :) It's best to take them with a pinch of salt, and maybe even redefine them for yourself and others.
Sources
Possibly originating from "Machine Learning - Discriminative and Generative" (Tony Jebara, 2004).
Crash Course in Machine Learning by Google
The Generative-Discriminative Fallacy
"Principled Hybrids of Generative and Discriminative Models" (Lasserre et al., 2006)
shimao's question
Binu Jasim's answer
Comparing logistic regression and naive Bayes: cs.cmu.edu/~tom/mlbook/NBayesLogReg.pdf
"On Discriminative vs. Generative classifiers"
Comment on "On Discriminative vs. Generative classifiers"
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/04/DengJaitly2015-ch1-2.pdf
"Your classifier is secretly an energy-based model" (Grathwohl et al., 2019)
Stanford CS236 notes: Technically, a probabilistic discriminative model is also a generative model of the labels conditioned on the data. However, the term generative models is typically reserved for high dimensional data.
An additional informative point that goes well with the answer by StompChicken above.
The fundamental difference between discriminative models and generative models is:
Discriminative models learn the (hard or soft) boundary between classes
Generative models model the distribution of individual classes
Edit:
A generative model is one that can generate data. It models both the features and the class (i.e. the complete data).
If we model P(x,y), we can use this probability distribution to generate data points, and hence all algorithms modeling P(x,y) are generative.
E.g. of generative models:
Naive Bayes models P(c) and P(d|c), where c is the class and d is the feature vector.
Also, P(c,d) = P(c) * P(d|c).
Hence, Naive Bayes in some form models P(c,d).
Bayes Net
Markov Nets
A discriminative model is one that can only be used to discriminate/classify the data points.
You only need to model P(y|x) in such cases (i.e. the probability of a class given the feature vector).
E.g. of discriminative models:
logistic regression
Neural Networks
Conditional random fields
In general, generative models need to model much more than discriminative models, and hence are sometimes not as effective. As a matter of fact, most (not sure if all) unsupervised learning algorithms, like clustering, can be called generative, since they model P(d) (and there are no classes :P).
PS: Part of the answer is taken from source
A generative model learns the full distribution of the training data and uses it to predict the response.
A discriminative model's job is just to classify, i.e. to differentiate between the 2 outcomes.
All the previous answers are great, and I'd like to plug in one more point.
From generative models we can derive any distribution, while from discriminative models we can only obtain the conditional distribution P(Y|X) (or we can say they are only useful for discriminating Y's label), which is why they are called discriminative models. A discriminative model doesn't assume that the X's are independent given Y ($X_i \perp X_{-i} | Y$) and hence is usually more powerful for calculating that conditional distribution.
My two cents:
Discriminative approaches highlight differences
Generative approaches do not focus on differences; they try to build a model that is representative of the class.
There is an overlap between the two.
Ideally both approaches should be used: one will be useful to find similarities and the other will be useful to find dis-similarities.
This article helped me a lot in understanding the concept.
In summary,
Both are probabilistic models, meaning they both use probability (conditional probability, to be precise) to calculate classes for the unknown data.
Generative classifiers apply the joint PDF and Bayes' theorem to the data set and calculate the conditional probability using values from those.
Discriminative classifiers directly find the conditional probability on the data set.
Some good reading material: conditional probability, joint PDF.
