Building a sentiment classifier
At the beginning of the chapter, we devoted a section to understanding kernel density estimation and how it can be used to approximate the probability density function of a random variable from a given set of samples. We are going to put it to use in this section.
We have one set of tweets labeled positive and another set labeled negative. The idea is to learn the PDF of each of these two datasets independently using kernel density estimation.
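As a minimal sketch (assuming each tweet has already been turned into a numeric feature vector of delta TF-IDF weights; the `features_pos` and `features_neg` arrays below are hypothetical placeholders), the two density estimates could be fit with scikit-learn's `KernelDensity`:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical feature matrices: one row per tweet, one column per feature.
# In this chapter's setting, the features are delta TF-IDF weights.
rng = np.random.default_rng(0)
features_pos = rng.standard_normal((500, 5))  # placeholder: positive tweets
features_neg = rng.standard_normal((400, 5))  # placeholder: negative tweets

# Fit one kernel density estimate per class. The bandwidth is a free
# parameter that would normally be tuned, e.g. by cross-validation.
kde_pos = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(features_pos)
kde_neg = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(features_neg)
```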
From Bayes' rule, we know that
P(label | x) = P(x | label) * P(label) / P(x)
Here, P(x | label) is the likelihood, P(label) is the prior, and P(x) is the evidence. The label can be either positive sentiment or negative sentiment.
Using the PDFs learned through kernel density estimation, we can easily calculate the likelihood P(x | label). From our class distribution, we know the prior P(label).
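As a quick sketch (reusing the hypothetical arrays above), the priors fall straight out of the class counts:

```python
n_pos, n_neg = len(features_pos), len(features_neg)
prior_pos = n_pos / (n_pos + n_neg)  # P(label = positive)
prior_neg = n_neg / (n_pos + n_neg)  # P(label = negative)
```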
For any new tweet, we can now use Bayes' rule to calculate the two posteriors:
P(Label = Positive | words and their delta TF-IDF weights)
P(Label = Negative | words and their delta TF-IDF weights)
Whichever posterior is larger gives the predicted sentiment; since the evidence P(x) is the same for both labels, it can be dropped from the comparison.
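Putting the pieces together, here is a sketch of the final decision rule built on the hypothetical objects above. It works in log space for numerical stability and omits the shared evidence term, since that cancels in the comparison:

```python
def classify(x, kde_pos, kde_neg, prior_pos, prior_neg):
    """Return 'positive' or 'negative' for a single feature vector x."""
    x = np.asarray(x).reshape(1, -1)
    # score_samples returns log P(x | label); adding the log prior gives the
    # log posterior up to the shared log-evidence term, which cancels below.
    log_post_pos = kde_pos.score_samples(x)[0] + np.log(prior_pos)
    log_post_neg = kde_neg.score_samples(x)[0] + np.log(prior_neg)
    return "positive" if log_post_pos > log_post_neg else "negative"

print(classify(np.zeros(5), kde_pos, kde_neg, prior_pos, prior_neg))
```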