Statistical Learning: Linear Regression


Application Scenarios

1. Is there a relationship between advertising budget and sales?
2. If so, how strong is it? That is, given the advertising budget, can sales be predicted accurately?
3. Does every type of advertising contribute to sales?
4. How accurately can the effect of each medium on sales be estimated?
5. How accurately can future sales be predicted?
6. Is the relationship between advertising and sales linear?
7. Is there synergy (a synergy effect) among the advertising media? In statistics this is known as an interaction effect. For example: is spending \$50,000 each on TV and radio advertising equivalent to spending the entire \$100,000 on a single medium?

Simple Linear Regression

For the simple linear regression model $Y=\beta_0+\beta_1X+\epsilon$, the standard errors of the least-squares estimates are

$$\begin{aligned} SE\left(\hat\beta_0\right)^2&=\sigma^2\left[{1\over n}+{\bar x^2\over\sum_{i=1}^n(x_i-\bar x)^2}\right],\\ SE\left(\hat\beta_1\right)^2&={\sigma^2\over\sum_{i=1}^n(x_i-\bar x)^2}, \end{aligned}$$

where $\sigma$ denotes the standard deviation of the error term, $\sigma^2=Var(\epsilon)$.

In practice $\sigma$ is unknown and is estimated by the residual standard error, $$RSE=\sqrt{RSS\over n-2}=\sqrt{\frac{1}{n-2}\sum_{i=1}^n\left(y_i-\hat y_i\right)^2}. \label{eq:rse}$$

A related quantity is the root mean squared error, $RMSE=\sqrt{RSS\over n}$.

The approximate1 $95\%$ confidence intervals for the coefficients are $$\hat\beta_0\pm 2\cdot SE\left(\hat\beta_0\right),\qquad \hat\beta_1\pm 2\cdot SE\left(\hat\beta_1\right). \label{eq:lr-confidence-interval}$$

To test the null hypothesis $H_0:\beta_1=0$ against $H_1:\beta_1\neq0$, we compute the t-statistic2 $$t={\hat\beta_1-0\over SE\left(\hat\beta_1\right)}, \label{eq:t-statistic}$$ which follows a $t$-distribution with $n-2$ degrees of freedom when $H_0$ is true.
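The formulas above can be checked numerically. A minimal sketch on synthetic data (all variable names hypothetical), verified against R's `lm()`:

```r
set.seed(1)
n <- 100
x <- runif(n, 0, 10)
y <- 2 + 0.5 * x + rnorm(n)          # true beta0 = 2, beta1 = 0.5

# Least-squares estimates
beta1 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
beta0 <- mean(y) - beta1 * mean(x)

# RSE estimates sigma; plug it into the SE formula for beta1
rss <- sum((y - (beta0 + beta1 * x))^2)
rse <- sqrt(rss / (n - 2))
se1 <- rse / sqrt(sum((x - mean(x))^2))

# t-statistic and two-sided p-value for H0: beta1 = 0
tval <- beta1 / se1
pval <- 2 * pt(-abs(tval), df = n - 2)

fit <- lm(y ~ x)                     # summary(fit) reports the same numbers
```

The hand-computed `beta1`, `rse`, and `tval` should agree with `coef(fit)`, `summary(fit)$sigma`, and the `t value` column of `summary(fit)$coefficients` up to numerical precision.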

advertising <- read.csv('Advertising.csv')
lm.fit <- lm(Sales ~ TV, data = advertising)
summary(lm.fit)

# Output:
Call:
lm(formula = Sales ~ TV, data = advertising)

Residuals:
Min      1Q  Median      3Q     Max
-8.3860 -1.9545 -0.1913  2.0671  7.2124

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.032594   0.457843   15.36   <2e-16 ***
TV          0.047537   0.002691   17.67   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.259 on 198 degrees of freedom
Multiple R-squared:  0.6119,    Adjusted R-squared:  0.6099
F-statistic: 312.1 on 1 and 198 DF,  p-value: < 2.2e-16

The last column of the Coefficients table, $Pr(>|t|)$, is the p-value: the probability of observing a t-statistic at least as extreme as the one computed, assuming $H_0$ is true. When it is very small, we reject $H_0$ and accept $H_1$. Rejecting $H_0$ means $\beta_1\neq0$, i.e., $X$ is associated with $Y$. The stars indicate the strength of significance. Here the last column shows that Sales is related to TV advertising.

$RSE$ measures how far the fitted model deviates from the actual data. The $R^2$ statistic is an alternative measure of fit3: $$R^2 = {TSS-RSS\over TSS}=1-{RSS\over TSS},\qquad TSS=\sum_{i=1}^n\left(y_i-\bar y\right)^2, \label{eq:r2-statistic}$$ where TSS (also written SST) is the total sum of squares. A larger $R^2$ is better. Adding a new variable can only increase the training $R^2$4, while on a test set $R^2$ can even be negative, so $R^2\in(-\infty, 1]$. An $R^2$ near $0$ indicates that the regression does not explain much of the variability in the response: the linear model may be wrong, or the inherent error $\sigma^2$ may be high. In this example, $R^2\approx 0.61$ means that slightly less than two-thirds of the variability in Sales is explained by the regression on TV. Although $R^2\in[0,1]$ on the training data, what counts as a good $R^2$ depends on the application.

$R^2$ measures the linear association between $X$ and $Y$. The correlation $$r=Cor(X,Y)={ \sum_{i=1}^n\left(x_i-\bar x\right)\left(y_i-\bar y\right) \over \sqrt{\sum_{i=1}^n\left(x_i-\bar x\right)^2}\sqrt{\sum_{i=1}^n\left(y_i-\bar y\right)^2} } \label{eq:correlation}$$ is another such measure. In fact, for simple linear regression, $R^2=r^2$.
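The identity $R^2=r^2$ is easy to confirm on synthetic data (hypothetical variables):

```r
set.seed(2)
x <- rnorm(50)
y <- 1 + 3 * x + rnorm(50)
fit <- lm(y ~ x)

tss <- sum((y - mean(y))^2)
rss <- sum(residuals(fit)^2)
r2  <- 1 - rss / tss

# In simple linear regression, R^2 equals the squared correlation
all.equal(r2, cor(x, y)^2)              # TRUE
all.equal(r2, summary(fit)$r.squared)   # TRUE
```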

Multiple Linear Regression

The multiple regression model with $p$ distinct predictors is

$$Y=\beta_0+\beta_1X_1+\beta_2X_2+\cdots+\beta_pX_p+\epsilon. \label{eq:population-regression-line2}$$

To test $H_0:\beta_1=\beta_2=\cdots=\beta_p=0$ against the alternative that at least one $\beta_j$ is nonzero, we use the F-statistic. It relies on the fact that

$E\{RSS/(n-p-1)\}=\sigma^2,$

and that, when $H_0$ is true,

$E\{(TSS-RSS)/p\}=\sigma^2,$

so $F={(TSS-RSS)/p\over RSS/(n-p-1)}$ is close to $1$ under $H_0$ and substantially larger than $1$ otherwise.

advertising <- read.csv('Advertising.csv')
lm.fit <- lm(Sales ~ TV + Radio + Newspaper, data = advertising)
summary(lm.fit)

# Output:
Call:
lm(formula = Sales ~ TV + Radio + Newspaper, data = advertising)

Residuals:
Min      1Q  Median      3Q     Max
-8.8277 -0.8908  0.2418  1.1893  2.8292

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  2.938889   0.311908   9.422   <2e-16 ***
TV           0.045765   0.001395  32.809   <2e-16 ***
Radio        0.188530   0.008611  21.893   <2e-16 ***
Newspaper   -0.001037   0.005871  -0.177     0.86
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.686 on 196 degrees of freedom
Multiple R-squared:  0.8972,    Adjusted R-squared:  0.8956
F-statistic: 570.3 on 3 and 196 DF,  p-value: < 2.2e-16

To test whether a particular subset of $q$ coefficients is zero, the null hypothesis is

$$H_0:\beta_{p-q+1}=\beta_{p-q+2}=\cdots=\beta_p=0.$$

Fit a second model that omits those $q$ variables, let $RSS_0$ be its residual sum of squares, and compute

$$F={(RSS_0-RSS)/q\over RSS/(n-p-1)}. \label{eq:f-statistic2}$$
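This partial F-test can be computed by hand and checked against R's `anova()`; a sketch on synthetic data (hypothetical variables, with `x3` deliberately irrelevant):

```r
set.seed(3)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y  <- 1 + 2 * x1 + 3 * x2 + rnorm(n)   # x3 has no true effect

fit_full    <- lm(y ~ x1 + x2 + x3)    # p = 3 predictors
fit_reduced <- lm(y ~ x1 + x2)         # drops q = 1 predictor

rss  <- sum(residuals(fit_full)^2)
rss0 <- sum(residuals(fit_reduced)^2)
q <- 1; p <- 3
Fstat <- ((rss0 - rss) / q) / (rss / (n - p - 1))

# anova() performs the same partial F-test
anova(fit_reduced, fit_full)
```

The hand-computed `Fstat` should match the `F` column in the `anova()` table.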

> cor(advertising[,2:5])

                  TV      Radio  Newspaper     Sales
TV        1.00000000 0.05480866 0.05664787 0.7822244
Radio     0.05480866 1.00000000 0.35410375 0.5762226
Newspaper 0.05664787 0.35410375 1.00000000 0.2282990
Sales     0.78222442 0.57622257 0.22829903 1.0000000

Note that Sales and Newspaper are correlated ($0.23$) even though the Newspaper coefficient is insignificant in the multiple regression: Newspaper is correlated with Radio ($0.35$), which does drive sales5.

Programming Examples

Handling factor variables in R

# stringsAsFactors so raceeth is read as a factor (required in R >= 4.0)
pisaTrain <- read.csv('Data/pisa2009train.csv', stringsAsFactors = TRUE)
pisaTest  <- read.csv('Data/pisa2009test.csv', stringsAsFactors = TRUE)  # test-set path assumed

# Drop observations with missing values
pisaTrain <- na.omit(pisaTrain)
pisaTest <- na.omit(pisaTest)

# Make 'White' the reference level, since it is the most frequent category
pisaTrain$raceeth = relevel(pisaTrain$raceeth, "White")
pisaTest$raceeth = relevel(pisaTest$raceeth, "White")

# Linear regression with factor variables
lmScore <- lm(readingScore ~ ., data = pisaTrain)
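A small self-contained illustration of what `relevel` does to a factor's baseline (hypothetical data):

```r
f <- factor(c("White", "Asian", "Black", "White", "Hispanic"))
levels(f)                 # alphabetical by default: "Asian" "Black" "Hispanic" "White"

f2 <- relevel(f, "White") # move "White" to the front
levels(f2)                # "White" "Asian" "Black" "Hispanic"

# With f2 as a predictor, lm() uses "White" as the reference category,
# so each dummy coefficient is the contrast of its level against "White".
```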

Stepwise regression with step

R's step function is a simplified version of the stepAIC function in the MASS package. It performs model selection by the AIC criterion and returns a reduced model.

# 接上例
lmScoreSimple <- step(lmScore)
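A self-contained sketch of how `step` prunes irrelevant predictors (synthetic data, hypothetical variable names):

```r
set.seed(4)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100))
d$y <- 1 + 2 * d$x1 + rnorm(100)       # only x1 matters

full    <- lm(y ~ ., data = d)
reduced <- step(full, trace = 0)       # stepwise elimination by AIC

# The reduced model never has more terms than the full one;
# with this setup it typically retains only x1.
length(coef(reduced)) <= length(coef(full))
```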


Footnotes

1. Approximately for several reasons. Equation relies on the assumption that the errors are Gaussian. Also, the factor of $2$ in front of the $SE\left(\hat\beta_1\right)$ term will vary slightly depending on the number of observations $n$ in the linear regression. To be precise, rather than the number $2$, Equation should contain the $97.5\%$ quantile of a $t$-distribution with $n−2$ degrees of freedom.

2. How should the $t$-distribution and the $p$-value be understood?

3. When computing TSS on a test set, $\bar y$ is taken from the training set.

4. The model’s $R^2$ value can never decrease from adding new variables to the model. This is due to the fact that it is always possible to set the coefficient for the new variable to zero in the new model. However, this would be the same as the old model. So the only reason to make the coefficient non-zero is if it improves the $R^2$ value of the model, since linear regression picks the coefficients to minimize the error terms, which is the same as maximizing the $R^2$.

5. This slightly counterintuitive result is very common in many real life situations. Consider an absurd example to illustrate the point. Running a regression of shark attacks versus ice cream sales for data collected at a given beach community over a period of time would show a positive relationship, similar to that seen between sales and newspaper. Of course no one (yet) has suggested that ice creams should be banned at beaches to reduce shark attacks. In reality, higher temperatures cause more people to visit the beach, which in turn results in more ice cream sales and more shark attacks. A multiple regression of attacks versus ice cream sales and temperature reveals that, as intuition implies, the former predictor is no longer significant after adjusting for temperature.

6. Partial effect: the effect of one explanatory variable on the dependent variable, holding all other factors in the regression model fixed.