In linear regression, logistic regression, and softmax regression, what we learn is \(p(y|x;\theta)\), the conditional distribution of \(y\) given \(x\): given a new input \(x\), we compute the probability of each possible output. Algorithms of this kind are called discriminative learning algorithms. In this section we introduce another family, generative learning algorithms, in which we model \(p(x|y)\) and \(p(y)\); that is, we estimate the probability of each label and the distribution of the input given each label. By Bayes' rule:
\[ \begin{align*} p(y|x) &= \frac{p(x|y)p(y)}{p(x)} \\ &= \frac{p(x|y)p(y)}{\sum_{j \in \mathcal{Y}}p(x|j)p(j)} \end{align*} \]
Given a new input \(x\), we predict its label by computing:
\[ \begin{align*} y &= \arg\max_y p(y|x)\\ &= \arg\max_y \frac{p(x|y)p(y)}{p(x)}\\ &= \arg\max_y p(x|y)p(y) \end{align*} \]
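As a minimal sketch of this decision rule, prediction is just an argmax over classes, done in log space to avoid underflow. Here `log_px_given_y` and `log_py` are hypothetical callbacks standing in for whatever concrete model (GDA, Naive Bayes, ...) supplies the densities:

```python
import numpy as np

def predict(x, classes, log_px_given_y, log_py):
    """Generic generative-model prediction: argmax_y log p(x|y) + log p(y).

    log_px_given_y and log_py are placeholders for the densities of a
    concrete model; working in log space avoids numerical underflow.
    """
    scores = [log_px_given_y(x, y) + log_py(y) for y in classes]
    return classes[int(np.argmax(scores))]
```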

Gaussian Discriminant Analysis

Fitting the Model

We first briefly review the multivariate Gaussian distribution. It has two parameters: a mean vector \(\mu \in \mathbb{R}^{n}\) and a covariance matrix \(\Sigma \in \mathbb{R}^{n \times n}\) (symmetric positive definite, so that \(\Sigma^{-1}\) and \(|\Sigma|^{1/2}\) below are well defined). Its probability density function is:
\[ p(x;\mu,\Sigma) = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right) \]
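This density is easiest to evaluate numerically in log space. A minimal sketch, assuming \(\Sigma\) is symmetric positive definite:

```python
import numpy as np

def mvn_log_density(x, mu, Sigma):
    """log p(x; mu, Sigma) for the multivariate Gaussian.

    Uses slogdet and solve instead of det and inv for numerical stability.
    """
    n = mu.shape[0]
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    quad = diff @ np.linalg.solve(Sigma, diff)  # (x-mu)^T Sigma^{-1} (x-mu)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)
```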
Andrew Ng's lecture notes discuss the multivariate Gaussian in more detail, so we will not elaborate here.


The first generative learning algorithm we introduce is Gaussian discriminant analysis (GDA), which handles binary classification. In GDA we model \(p(y)\) with a Bernoulli distribution and \(p(x|y)\) with multivariate Gaussians:
\[ \begin{align*} y &\sim \text{Bernoulli}(\phi)\\ x|y=0 &\sim \mathcal{N}(\mu_0, \Sigma)\\ x|y=1 &\sim \mathcal{N}(\mu_1, \Sigma)\\ \end{align*} \]
The parameters of GDA are \(\phi, \mu_0, \mu_1, \Sigma\). We write down the log-likelihood and estimate them by maximum likelihood:
\[ \begin{align*} l(\phi, \mu_0, \mu_1, \Sigma) &= \log \prod_{i=1}^{m}p(x^{(i)},y^{(i)};\phi, \mu_0, \mu_1, \Sigma) \\ &= \sum_{i=1}^{m} \log p(x^{(i)},y^{(i)};\phi, \mu_0, \mu_1, \Sigma) \\ &= \sum_{i=1}^{m} \log p(y^{(i)};\phi)p(x^{(i)}|y^{(i)};\mu_0, \mu_1, \Sigma) \\ &= \sum_{i=1}^{m} \log \phi^{y^{(i)}}(1-\phi)^{1-y^{(i)}}(2\pi)^{-\frac{n}{2}}|\Sigma|^{-\frac{1}{2}}\exp\left(-\frac{1}{2}(1-y^{(i)})(x^{(i)}-\mu_0)^T\Sigma^{-1}(x^{(i)}-\mu_0)\right) \\ &\qquad\exp\left(-\frac{1}{2}y^{(i)}(x^{(i)}-\mu_1)^T\Sigma^{-1}(x^{(i)}-\mu_1)\right) \\ &= \sum_{i=1}^{m}y^{(i)}\log \phi + (1-y^{(i)})\log(1-\phi) - \frac{n}{2}\log(2\pi)-\frac{1}{2}\log |\Sigma|\\ &\qquad-\frac{1}{2}(1-y^{(i)})(x^{(i)}-\mu_0)^T\Sigma^{-1}(x^{(i)}-\mu_0)-\frac{1}{2}y^{(i)}(x^{(i)}-\mu_1)^T\Sigma^{-1}(x^{(i)}-\mu_1) \end{align*} \]
First, differentiate with respect to \(\phi\):
\[ \begin{align*} \frac{\partial l(\phi,\mu_0,\mu_1,\Sigma)}{\partial \phi} &= \sum_{i=1}^{m}\frac{y^{(i)}}{\phi} + \frac{-(1-y^{(i)})}{1-\phi}\\ &= \sum_{i=1}^{m}\frac{y^{(i)}-\phi}{\phi(1-\phi)}\\ \end{align*} \]
\(\frac{\partial l(\phi,\mu_0,\mu_1,\Sigma)}{\partial \phi} = 0\)得:\(\phi = \frac{1}{m}\sum_{i=1}^{m}y^{(i)}\)

Next, differentiate with respect to \(\mu_0\):
\[ \begin{align*} \frac{\partial l(\phi,\mu_0,\mu_1,\Sigma)}{\partial \mu_0} &= \sum_{i=1}^{m}-\frac{1}{2}(1-y^{(i)})\cdot 2\Sigma^{-1}(x^{(i)}-\mu_0) \cdot (-1)\\ &= \Sigma^{-1}\sum_{i=1}^{m}(1-y^{(i)})(x^{(i)}-\mu_0)\\ \end{align*} \]
\(\frac{\partial l(\phi,\mu_0,\mu_1,\Sigma)}{\partial \mu_0}=0\)得:\(\mu_0 = \frac{\sum_{i=1}^{m}(1-y^{(i)})x^{(i)}}{\sum_{i=1}^{m}(1-y^{(i)})}\),同理:\(\mu_1 = \frac{\sum_{i=1}^{m}y^{(i)}x^{(i)}}{\sum_{i=1}^{m}y^{(i)}}\)

Finally, differentiate with respect to \(\Sigma\):
\[ \begin{align*} \frac{\partial l(\phi,\mu_0,\mu_1,\Sigma)}{\partial \Sigma} &= \sum_{i=1}^{m}-\frac{1}{2}\cdot\frac{1}{|\Sigma|}\cdot\frac{\partial|\Sigma|}{\partial\Sigma}+\frac{1}{2}(1-y^{(i)})\Sigma^{-1}(x^{(i)}-\mu_0)(x^{(i)}-\mu_0)^T\Sigma^{-1}\\ &\qquad+\frac{1}{2}y^{(i)}\Sigma^{-1}(x^{(i)}-\mu_1)(x^{(i)}-\mu_1)^T\Sigma^{-1}\\ &= \sum_{i=1}^{m}-\frac{1}{2}\Sigma^{-1}+\frac{1}{2}\Sigma^{-1}(x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T\Sigma^{-1}\\ \end{align*} \]
\(\frac{\partial l(\phi,\mu_0,\mu_1,\Sigma)}{\partial \Sigma} = 0\)得:\(\Sigma = \frac{1}{m}\sum_{i=1}^{m}(x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T\)

In the derivative with respect to \(\Sigma\) we used two identities (both valid for symmetric \(\Sigma\)):
\[ \frac{\partial |\Sigma|}{\partial \Sigma} = |\Sigma|\Sigma^{-1}, \qquad \frac{\partial}{\partial \Sigma}\left(a^T\Sigma^{-1}a\right) = -\Sigma^{-1}aa^T\Sigma^{-1} \]

In summary, the GDA parameter estimates are:
\[ \begin{align*} \phi &= \frac{\sum_{i=1}^{m}1\left\{y^{(i)}=1\right\}}{m}\\ \mu_0 &= \frac{\sum_{i=1}^{m}1\left\{y^{(i)}=0\right\}x^{(i)}}{\sum_{i=1}^{m}1\left\{y^{(i)}=0\right\}}\\ \mu_1 &= \frac{\sum_{i=1}^{m}1\left\{y^{(i)}=1\right\}x^{(i)}}{\sum_{i=1}^{m}1\left\{y^{(i)}=1\right\}}\\ \Sigma &= \frac{\sum_{i=1}^{m}(x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T}{m} \end{align*} \]
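These closed-form estimates translate directly into code. A minimal NumPy sketch (the function name `fit_gda` is our own, not from the notes):

```python
import numpy as np

def fit_gda(X, y):
    """Closed-form MLE for GDA, following the formulas above.

    X: (m, n) matrix of inputs; y: (m,) array of 0/1 labels.
    Returns (phi, mu0, mu1, Sigma) with the shared covariance matrix.
    """
    m, _ = X.shape
    phi = y.mean()
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    # Center each example against the mean of its own class.
    centered = X - np.where(y[:, None] == 1, mu1, mu0)
    Sigma = centered.T @ centered / m
    return phi, mu0, mu1, Sigma
```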

GDA vs. LR

Viewing \(p(y=1|x;\phi, \Sigma, \mu_0, \mu_1)\) as a function of \(x\):
\[ \begin{align*} &p(y=1|x;\phi, \Sigma, \mu_0, \mu_1)\\ = &\frac{p(y=1,x;\phi, \Sigma, \mu_0, \mu_1)}{p(x;\phi, \Sigma, \mu_0, \mu_1)}\\ = &\frac{p(y=1)p(x|y=1)}{p(y=0)p(x|y=0)+p(y=1)p(x|y=1)}\\ = &\frac{\phi \cdot (2\pi)^{-\frac{n}{2}}|\Sigma|^{-\frac{1}{2}}\exp(-\frac{1}{2}(x-\mu_1)^T\Sigma^{-1}(x-\mu_1))}{\phi \cdot (2\pi)^{-\frac{n}{2}}|\Sigma|^{-\frac{1}{2}}\exp(-\frac{1}{2}(x-\mu_1)^T\Sigma^{-1}(x-\mu_1)) + (1-\phi) \cdot (2\pi)^{-\frac{n}{2}}|\Sigma|^{-\frac{1}{2}}\exp(-\frac{1}{2}(x-\mu_0)^T\Sigma^{-1}(x-\mu_0))}\\ = &\frac{1}{1 + \frac{1-\phi}{\phi} \cdot \exp\left(\frac{1}{2}(x-\mu_1)^T\Sigma^{-1}(x-\mu_1) - \frac{1}{2}(x-\mu_0)^T\Sigma^{-1}(x-\mu_0)\right)}\\ = &\frac{1}{1 + \frac{1-\phi}{\phi} \cdot \exp\left(-(\mu_1-\mu_0)^T\Sigma^{-1}x + \frac{1}{2}\mu_1^T\Sigma^{-1}\mu_1 - \frac{1}{2}\mu_0^T\Sigma^{-1}\mu_0\right)}\\ = &\frac{1}{1 + \exp\left(-\left((\mu_1-\mu_0)^T\Sigma^{-1}x + \frac{1}{2}\mu_0^T\Sigma^{-1}\mu_0 - \frac{1}{2}\mu_1^T\Sigma^{-1}\mu_1 + \log\frac{\phi}{1-\phi}\right)\right)}\\ \end{align*} \]
So \(p(y=1|x;\phi,\Sigma, \mu_0, \mu_1)\) is exactly the hypothesis form of logistic regression, \(\frac{1}{1+e^{-\theta^Tx}}\), once an intercept term is included in \(x\): here \(\theta = \Sigma^{-1}(\mu_1-\mu_0)\) and the remaining constant terms form the intercept.
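We can check this equivalence numerically: map the GDA parameters to the logistic parameters derived above and compare against the posterior computed directly from Bayes' rule. A sketch with arbitrary made-up parameters:

```python
import numpy as np

def gda_posterior_params(phi, mu0, mu1, Sigma):
    """Map GDA parameters to logistic parameters (theta, theta0) so that
    p(y=1|x) = 1 / (1 + exp(-(theta @ x + theta0))), as derived above."""
    Sinv = np.linalg.inv(Sigma)
    theta = Sinv @ (mu1 - mu0)
    theta0 = (0.5 * mu0 @ Sinv @ mu0 - 0.5 * mu1 @ Sinv @ mu1
              + np.log(phi / (1.0 - phi)))
    return theta, theta0

def mvn_pdf(x, mu, S):
    d = x - mu
    n = len(mu)
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt((2 * np.pi) ** n * np.linalg.det(S))

# Arbitrary made-up parameters (n = 2), just to check the algebra.
rng = np.random.default_rng(0)
phi, mu0, mu1 = 0.3, np.zeros(2), np.array([1.0, 2.0])
A = rng.normal(size=(2, 2))
Sigma = A @ A.T + np.eye(2)          # symmetric positive definite
x = rng.normal(size=2)

direct = (phi * mvn_pdf(x, mu1, Sigma)
          / (phi * mvn_pdf(x, mu1, Sigma) + (1 - phi) * mvn_pdf(x, mu0, Sigma)))
theta, theta0 = gda_posterior_params(phi, mu0, mu1, Sigma)
logistic = 1.0 / (1.0 + np.exp(-(theta @ x + theta0)))
assert np.isclose(direct, logistic)  # the two expressions agree
```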

When we derived logistic regression, we only assumed that \(y\) given \(x\) follows a Bernoulli distribution; we never assumed that inputs of the same class are Gaussian. GDA therefore makes strictly stronger assumptions: every GDA model induces a posterior of the logistic form, but a logistic posterior does not imply Gaussian class-conditional densities, so we can view GDA as a special case of logistic regression. Comparing the two:

  • Logistic regression makes weaker assumptions, so it applies more widely and is more robust to incorrect modeling assumptions
  • When the data within each class really are (approximately) Gaussian, GDA is more data-efficient, reaching a given accuracy with fewer training examples

Naive Bayes

The Naive Bayes model is another generative learning algorithm. Like GDA, it models \(p(y)\) and \(p(x|y)\); the difference is that in GDA \(x\) is continuous, while in Naive Bayes \(x\) is discrete. We first introduce the multi-variate Bernoulli event model, in which each component of \(x\) takes only the values \(0\) and \(1\); that is, \(x \in \left\{0, 1\right\}^{n}\) with \(x_i \in \left\{0, 1\right\}\).

The Naive Bayes Classifier

In the multi-variate Bernoulli event model, an \(n\)-dimensional \(x\) has \(2^n\) possible values. When \(n\) is large, estimating a probability for each value is hopeless (modeling \(p(x|y)\) with an explicit multinomial distribution would require \(2^n-1\) parameters!). We therefore introduce a very strong assumption, the Naive Bayes assumption:

Given the label \(y\), the components of \(x\) are conditionally independent, i.e.:

\(\forall i \neq j, y \in \{0,1\}, p(x_i|y,x_j) = p(x_i|y)\)

The algorithm built on this assumption is the Naive Bayes classifier. Under the assumption, \(p(x|y)\) factorizes (first by the chain rule, then by conditional independence) as:
\[ \begin{align*} p(x|y) &= p(x_1,x_2,\cdots,x_n|y)\\ &= p(x_1|y)p(x_2|y,x_1)\cdots p(x_n|y,x_1,\dots,x_{n-1})\\ &= p(x_1|y)p(x_2|y)\cdots p(x_n|y)\\ &= \prod_{i=1}^n p(x_i|y) \end{align*} \]
The model parameters are:
\[ \begin{align*} \phi_y &= p(y=1)\\ \phi_{i|y=0} &= p(x_i=1|y=0), \quad i=1,2,\dots,n\\ \phi_{i|y=1} &= p(x_i=1|y=1), \quad i=1,2,\dots,n\\ \end{align*} \]
Given a training set \(\left\{(x^{(i)}, y^{(i)})\,|\,i=1,2, \dots,m\right\}\), the log-likelihood is:
\[ \begin{align*} &l(\phi_y, \phi_{j|y=0}, \phi_{j|y=1})\\ = &\log \prod_{i=1}^{m}p(x^{(i)},y^{(i)})\\ = &\log \prod_{i=1}^{m}p(y^{(i)})p(x^{(i)}|y^{(i)})\\ = &\log \prod_{i=1}^{m}\phi_y^{y^{(i)}}(1-\phi_y)^{1-y^{(i)}}\left[\prod_{j=1}^{n}\phi_{j|y=0}^{x^{(i)}_j}(1-\phi_{j|y=0})^{1-x^{(i)}_j}\right]^{1-y^{(i)}}\left[\prod_{j=1}^{n}\phi_{j|y=1}^{x^{(i)}_j}(1-\phi_{j|y=1})^{1-x^{(i)}_j}\right]^{y^{(i)}}\\ = &\sum_{i=1}^{m}y^{(i)}\log \phi_y + (1-y^{(i)}) \log(1-\phi_y) +(1-y^{(i)})\left[\sum_{j=1}^{n}x^{(i)}_j\log\phi_{j|y=0} + (1-x^{(i)}_j)\log(1-\phi_{j|y=0})\right]\\ &+ y^{(i)}\left[\sum_{j=1}^{n}x^{(i)}_j\log\phi_{j|y=1} + (1-x^{(i)}_j)\log(1-\phi_{j|y=1})\right]\\ \end{align*} \]

Setting the derivatives to \(0\) yields:
\[ \begin{align*} \phi_y &= \frac{\sum_{i=1}^{m}y^{(i)}}{m} = \frac{\sum_{i=1}^{m}1\left\{y^{(i)}=1\right\}}{m}\\ \phi_{j|y=0} &= \frac{\sum_{i=1}^{m}(1-y^{(i)})x^{(i)}_j}{\sum_{i=1}^{m}(1-y^{(i)})} = \frac{\sum_{i=1}^{m}1\left\{y^{(i)}=0\land x^{(i)}_j=1\right\}}{\sum_{i=1}^{m}1\left\{y^{(i)}=0\right\}}\\ \phi_{j|y=1} &= \frac{\sum_{i=1}^{m}y^{(i)}x^{(i)}_j}{\sum_{i=1}^{m}y^{(i)}} = \frac{\sum_{i=1}^{m}1\left\{y^{(i)}=1\land x^{(i)}_j=1\right\}}{\sum_{i=1}^{m}1\left\{y^{(i)}=1\right\}}\\ \end{align*} \]
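These estimates are simple class-conditional frequencies, so fitting is a few lines of NumPy. A minimal sketch (the function name is our own):

```python
import numpy as np

def fit_bernoulli_nb(X, y):
    """MLE for the multi-variate Bernoulli Naive Bayes model above.

    X: (m, n) binary matrix; y: (m,) array of 0/1 labels.
    Each estimate is just a class-conditional frequency.
    """
    phi_y = y.mean()
    phi_j_y0 = X[y == 0].mean(axis=0)  # p(x_j = 1 | y = 0)
    phi_j_y1 = X[y == 1].mean(axis=0)  # p(x_j = 1 | y = 1)
    return phi_y, phi_j_y0, phi_j_y1
```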

Laplace Smoothing

Now consider the following situation: some feature, say \(x_{35000}\), never takes the value \(1\) anywhere in the training data. Then:
\[ \begin{align*} \phi_{35000|y=0} &= \frac{\sum_{i=1}^{m}1\left\{x^{(i)}_{35000}=1 \land y^{(i)}=0\right\}}{\sum_{i=1}^{m}1\left\{y^{(i)}=0\right\}}=0\\ \phi_{35000|y=1} &= \frac{\sum_{i=1}^{m}1\left\{x^{(i)}_{35000}=1 \land y^{(i)}=1\right\}}{\sum_{i=1}^{m}1\left\{y^{(i)}=1\right\}}=0\\ \end{align*} \]
Recall that at prediction time the label we compute is:
\[ \begin{align*} y &= \arg\max_y p(x|y)p(y)\\ &= \arg\max_y p(y)\prod_{i=1}^{n}p(x_i|y) \end{align*} \]
Since \(\phi_{35000|y=0}\) and \(\phi_{35000|y=1}\) are both \(0\), any input with \(x_{35000}=1\) gets a probability estimate of \(0\) for both classes, the posterior is the indeterminate \(0/0\), and no prediction can be made. To fix this we introduce Laplace smoothing.

Consider a multinomially distributed random variable \(z \in \left\{1, 2, \dots, k\right\}\). Given \(m\) observations \(\left\{z^{(1)}, \dots, z^{(m)}\right\}\), maximum likelihood yields the estimates:
\[ \phi_j = \frac{\sum_{i=1}^m1\left\{z^{(i)}=j\right\}}{m} \]
We do not want to conclude that an outcome has probability \(0\) simply because it was never observed, so we estimate the parameters with Laplace smoothing instead:
\[ \phi_j = \frac{\sum_{i=1}^m1\left\{z^{(i)}=j\right\}+1}{m+k} \]
With Laplace smoothing, the parameter estimates of the Naive Bayes classifier become:
\[ \begin{align*} \phi_{j|y=0} &= \frac{\sum_{i=1}^{m}1\left\{y^{(i)}=0\land x^{(i)}_j=1\right\}+1}{\sum_{i=1}^{m}1\left\{y^{(i)}=0\right\}+2}\\ \phi_{j|y=1} &= \frac{\sum_{i=1}^{m}1\left\{y^{(i)}=1\land x^{(i)}_j=1\right\}+1}{\sum_{i=1}^{m}1\left\{y^{(i)}=1\right\}+2}\\ \end{align*} \]
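A sketch of the smoothed estimates, together with prediction done in log space so that the product \(\prod_j p(x_j|y)\) does not underflow (names are our own):

```python
import numpy as np

def fit_bernoulli_nb_smoothed(X, y):
    """Laplace-smoothed estimates: +1 to every count, +2 to every
    denominator, since each x_j takes 2 possible values."""
    phi_y = y.mean()
    phi_j_y0 = (X[y == 0].sum(axis=0) + 1.0) / ((y == 0).sum() + 2.0)
    phi_j_y1 = (X[y == 1].sum(axis=0) + 1.0) / ((y == 1).sum() + 2.0)
    return phi_y, phi_j_y0, phi_j_y1

def predict_bernoulli_nb(x, phi_y, phi_j_y0, phi_j_y1):
    """argmax_y p(y) * prod_j p(x_j|y), computed in log space."""
    log_p1 = np.log(phi_y) + np.sum(
        x * np.log(phi_j_y1) + (1 - x) * np.log(1 - phi_j_y1))
    log_p0 = np.log(1 - phi_y) + np.sum(
        x * np.log(phi_j_y0) + (1 - x) * np.log(1 - phi_j_y0))
    return int(log_p1 > log_p0)
```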

The Multinomial Event Model

So far we have considered the case where every component of \(x\) is \(0\) or \(1\); that model is called the multi-variate Bernoulli event model. We now introduce another model, the multinomial event model, in which \(x_j\) is the identity of the \(j\)-th word of a message and, given \(y\), the components of \(x\) are independent draws from one shared multinomial distribution. The parameters (the same for every word position \(j\)) are:
\[ \begin{align*} \phi_y &= p(y=1)\\ \phi_{k|y=1} &= p(x_j=k|y=1)\\ \phi_{k|y=0} &= p(x_j=k|y=0)\\ \end{align*} \]
Given a training set \(\left\{(x^{(i)}, y^{(i)}); i = 1, \dots, m\right\}\), where \(x^{(i)} = (x^{(i)}_1, \dots, x^{(i)}_{n_i})\) and message \(i\) contains \(n_i\) words, the likelihood is:
\[ \begin{align*} L(\phi_y, \phi_{k|y=1},\phi_{k|y=0}) &= \prod_{i=1}^{m}p(x^{(i)}, y^{(i)})\\ &= \prod_{i=1}^{m}\left(\prod_{j=1}^{n_i}p(x^{(i)}_j|y^{(i)};\phi_{k|y=1},\phi_{k|y=0})\right)p(y^{(i)};\phi_y) \end{align*} \]
Maximizing the likelihood gives:
\[ \begin{align*} \phi_y &= \frac{\sum_{i=1}^m1\left\{y^{(i)}=1\right\}}{m}\\ \phi_{k|y=0} &= \frac{\sum_{i=1}^{m}\sum_{j=1}^{n_i}1\left\{y^{(i)}=0 \land x^{(i)}_j=k\right\}}{\sum_{i=1}^{m}1\left\{y^{(i)}=0\right\}n_i}\\ \phi_{k|y=1} &= \frac{\sum_{i=1}^{m}\sum_{j=1}^{n_i}1\left\{y^{(i)}=1 \land x^{(i)}_j=k\right\}}{\sum_{i=1}^{m}1\left\{y^{(i)}=1\right\}n_i}\\ \end{align*} \]
With Laplace smoothing (here \(K\) is the dictionary size, i.e. the number of values each \(x_j\) can take), the estimates become:
\[ \begin{align*} \phi_{k|y=0} &= \frac{\sum_{i=1}^{m}\sum_{j=1}^{n_i}1\left\{y^{(i)}=0 \land x^{(i)}_j=k\right\}+1}{\sum_{i=1}^{m}1\left\{y^{(i)}=0\right\}n_i+K}\\ \phi_{k|y=1} &= \frac{\sum_{i=1}^{m}\sum_{j=1}^{n_i}1\left\{y^{(i)}=1 \land x^{(i)}_j=k\right\}+1}{\sum_{i=1}^{m}1\left\{y^{(i)}=1\right\}n_i+K}\\ \end{align*} \]
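A minimal sketch of the smoothed estimates for the multinomial event model, with messages represented as variable-length arrays of word indices (names are our own):

```python
import numpy as np

def fit_multinomial_nb(docs, y, K):
    """Laplace-smoothed MLE for the multinomial event model.

    docs: list of m integer arrays, each holding word indices in
    {0, ..., K-1} (messages have variable length n_i);
    y: length-m array of 0/1 labels; K: dictionary size.
    """
    y = np.asarray(y)
    phi_y = y.mean()
    counts = np.ones((2, K))             # Laplace: every count starts at 1
    totals = np.full(2, float(K))        # ... and every denominator at K
    for doc, label in zip(docs, y):
        counts[label] += np.bincount(doc, minlength=K)
        totals[label] += len(doc)
    return phi_y, counts[0] / totals[0], counts[1] / totals[1]
```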
Consider the spam-classification example. Under the multi-variate Bernoulli event model, a message is generated as follows: first decide according to \(p(y)\) whether it is spam, then walk through the whole dictionary (of size \(n\)) and, for the \(i\)-th word, decide according to \(p(x_i|y)\) whether to use that word in the message, each word being chosen independently.

Under the multinomial event model, we first decide according to \(p(y)\) whether the message is spam. Then, supposing the message contains \(n\) words, the first word is chosen from the dictionary according to \(p(x=k|y)\); the second word is chosen independently from the same distribution; and so on, until all \(n\) words have been determined. The contrast between the two generative stories is sketched below.
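The difference between the two stories is easy to see by sampling from them. A toy sketch with made-up parameters: the Bernoulli model flips one coin per dictionary word, while the multinomial model draws a sequence of word indices:

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5                                     # toy dictionary size
p_appear = rng.uniform(size=V)            # Bernoulli model: p(x_i = 1 | y)
p_word = rng.dirichlet(np.ones(V))        # multinomial model: p(x_j = k | y)

# Multi-variate Bernoulli: one independent coin flip per dictionary word.
email_bernoulli = (rng.uniform(size=V) < p_appear).astype(int)

# Multinomial event model: draw each of the message's words i.i.d.
n_words = 8
email_multinomial = rng.choice(V, size=n_words, p=p_word)

print(email_bernoulli)    # a length-V 0/1 vector: which words appear
print(email_multinomial)  # a length-n sequence of word indices
```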
