
Weighted regression

Weighted regression consists of assigning a different weight to each observation, giving each one more or less importance when fitting the regression.

One way to look at it is as solving the regression problem by minimizing the Weighted Mean Squared Error (WMSE) instead of the Mean Squared Error (MSE).

\[WMSE(\beta, w) = \frac{1}{n} \sum_{i=1}^n w_i(y_i - \overrightarrow{x_i} \beta)^2\]

Intuitively, we are looking for the coefficients that minimize the MSE while putting a different weight on each observation. OLS is the particular case where all the \(w_i = 1\).
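Written directly in R, this is just a weighted mean of the squared errors (a quick hypothetical helper, not from the original post):

wmse = function(y, y_hat, w) mean(w * (y - y_hat)^2)

# with all weights equal to 1 it reduces to the usual MSE
wmse(y = c(1, 2, 3), y_hat = c(1.1, 1.9, 3.2), w = c(1, 1, 1))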

Why do this? A few reasons (Shalizi 2015, Chapter 7.1):

  • Focusing Accuracy: We want to predict especially well some particular points or region of points, maybe because that's the focus for production, or because being wrong at those observations has a huge cost, etc. Weighted regression will make an extra effort to match that data.

  • Discount imprecision: OLS returns the maximum likelihood estimates when the residuals are independent and normal, with mean 0 and constant variance. When we face non-constant variance, OLS no longer returns the MLE. The logic behind weighted regression is that it makes no sense to pay equal attention to all the observations, since some of them have higher variance and are less indicative of the conditional mean. We should put more emphasis on the regions of lower variance, predict them well, and "expect to fit poorly where the noise is big".
    The weights that return the MLE are \(w_i = \frac{1}{\sigma_i^2}\).

  • Sampling bias: If we think or know that the observations in our data are not completely random, and some subset of the population might be under-represented (in a survey, for example, or because of data availability), it might make sense to weight observations inversely to their probability of being included in the sample. Under-represented observations get more weight, over-represented ones less.
    A similar situation is related to covariate shift. If the distribution of the variable x changes over time, we can use a weight defined as the ratio of the probability density functions, as sketched in the example after this list.

    "If the old pdf was p(x) and the new one is q(x), the weight we'd want to is \(w_i=q(x_i)/p(x_i)\)

  • Other: Related to GLMs, when the conditional mean is a non-linear function of a linear predictor. (Not further explained in the book at this point.)
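A minimal sketch of those covariate-shift weights, assuming we estimate both pdfs with kernel density estimates (old_x, new_x, and the helper objects are made up for illustration):

old_x = rnorm(1000, mean = 0, sd = 1)  # sample from the old distribution p
new_x = rnorm(1000, mean = 1, sd = 1)  # sample from the new distribution q

p_hat = approxfun(density(old_x), rule = 2)  # estimated old pdf p(x)
q_hat = approxfun(density(new_x), rule = 2)  # estimated new pdf q(x)

w = q_hat(old_x) / p_hat(old_x)  # one weight per training observation
# these weights could then be passed to lm(..., weights = w)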

Is there a scenario where OLS is better than weighted regression, assuming we can compute the weights?

Example.

First we will see the impact of using weighted regression in a simulated scenario where we actually know the variance of the error of each observation. This is not realistic, but it is useful to see the method in action.

library(tidyverse)

We generate 1000 data points with a linear relation between y and x: intercept = 0, slope = 5. We let the variance of the error depend on the value of x, so higher values of x are associated with a higher error variance.

set.seed(23)
n = 1000
x = runif(n, 0, 10)
error = rnorm(n, 0, x/1.5)  # error sd grows with x: heteroscedasticity
df = data.frame(x)
df = df %>% mutate(y = 5*x + error)  # true intercept 0, true slope 5
Visually:

ggplot(data = df, aes(x = x, y = y)) +
  geom_point(alpha = 0.3)

[Figure: scatter plot of y vs. x; the spread of y increases with x]

Linear regression
ols = lm(formula = "y~x", data=df)
summary(ols)

## 
## Call:
## lm(formula = "y~x", data = df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -14.868  -1.720  -0.137   1.918  14.722 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.19192    0.24278   0.791    0.429    
## x            4.95585    0.04148 119.489   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.855 on 998 degrees of freedom
## Multiple R-squared:  0.9347, Adjusted R-squared:  0.9346 
## F-statistic: 1.428e+04 on 1 and 998 DF,  p-value: < 2.2e-16
We get an intercept of 0.19 (non-significant; the true value is 0) and a slope of 4.96 (the true value is 5).

Weighted linear regression
wols = lm(formula = "y~x", data=df, weights = (1/sqrt(x)^2) )
summary(wols)
## 
## Call:
## lm(formula = "y~x", data = df, weights = (1/sqrt(x)^2))
## 
## Weighted Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.8880 -0.8601 -0.0016  0.8936  4.6535 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 0.001483   0.030072   0.049    0.961    
## x           4.993473   0.021874 228.286   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.498 on 998 degrees of freedom
## Multiple R-squared:  0.9812, Adjusted R-squared:  0.9812 
## F-statistic: 5.211e+04 on 1 and 998 DF,  p-value: < 2.2e-16

We get an intercept of 0.001, again non-significant but much closer to 0 and with a lower standard error, and a slope of 4.99, much closer to the true value of 5 and also with a lower standard error.

Conclusion: if we know the right weights, we can get better estimates from a linear regression in the presence of heteroscedasticity.

Inference is not valid in the dataset used for model selection.

Let's say we have a dataset, and we want to fit a model to it and do some inference, such as obtaining the coefficients and looking at their confidence intervals.

For such a task we would first need to find a model that we think approximates the real data-generating process behind the phenomenon.
This is the model selection step.
Then we would look at the output of our model and get the standard errors of the coefficients, calculate confidence intervals, or perform any other similar task. This is the inference step.

The issue here is that if we don't know the true model and we do model selection, the selected model is itself a random object. Why? Because the particular dataset we are using is a set of random variables. Another dataset might return a different model formula as the best among our options, since it would contain other observations and particularities.

Main problem:

Since we are selecting a model based on a particular dataset, the standard errors and p-values will be smaller than the actual ones.

"That means there is some extra randomness in your estimated parameters (and everything else), which isn't accounted for by formulas which assume a fixed model.
This is not just a problem with formal model-selection devices like cross-validation. If you do an initial, exploratory data analysis before deciding which model to use - and that's generally a good idea - you are, yourself, acting as a noisy, complicated model-selection device" (Sharizi 2017)

The most straightforward way to deal with this (if the observations are independent) is to split the data: do model selection on one part, then fit the selected model on the other part. The second fit is the one to use for inference.
You could fit the model to the full data, but that would include the part used for model selection, and you would still get falsely overconfident standard errors.

Let's see an example.
We will generate data following a somewhat "complicated" model with interactions, and split the data into two equal-sized parts: one for model selection and one for inference.
We will then fit a few formulas to the model selection part and pick the one with the minimum RMSE. Finally, we will compare the standard errors obtained in the model selection part with the ones obtained by fitting that model to the inference part.

Thanks to BrunoRodrigues for this post, which I used as a guideline to fit models with cross-validation in R.

We start by generating the data, including interactions.

library(rsample)    # initial_split(), training(), testing(), mc_cv(), analysis(), assessment()
library(yardstick)  # rmse()

set.seed(1)
N = 5000
b0 = 4
b1 = 1
b2 = 2
b3 = 3
b4 = 4
b5 = 5

x1 = runif(N, 0, 10)
x2 = rnorm(N, 20, 3)
x3 = runif(N, 20, 40)
error = rnorm(N, 0, 200)

y = b0 + b1*x1 + b2*x2 + b3*x3 + b4*x1*x2 + b5*x2*x3 + error


df = tibble(y, x1, x2, x3)

We do the first split: df_selection will be used to try different models and pick one.
df_inference will be used to do the actual inference given the selected model.

prop = 0.5

selection_inference_split = initial_split(df, prop=prop)

df_selection = training(selection_inference_split)
df_inference = testing(selection_inference_split)

To select a model using df_selection we will use cross-validation, looking for the model that generalizes best.
We will generate 30 splits with 70% of the data each and use the remaining 30% to calculate the RMSE metric.

validation_data <- mc_cv(df_selection, prop = 0.7, times = 30)

We create two functions. my_lm() runs a linear regression on the training part of each CV split and returns predictions for the testing part of that split; we run this for each candidate formula.
return_model() fits the model to the whole selection data, to extract the parameters and standard errors we get if we use the same dataset that was used for model selection.

my_lm <- function(formula, split, id){

    # fit on the analysis (training) part of the split
    analysis_set <- analysis(split)
    model <- lm(formula = formula, data = analysis_set)

    # predict on the assessment (testing) part of the split
    assessment_set <- assessment(split)

    tibble::tibble("id" = id,
                   "formula" = formula,
                   "truth" = assessment_set$y,
                   "prediction" = unlist(predict(model, newdata = assessment_set)))
}


return_model <- function(formula){

    # the same formula fitted to the whole selection set, to inspect the
    # coefficients and standard errors we get when inference reuses the
    # data used for model selection
    model <- lm(formula = formula, data = df_selection)
    list(model$coefficients, summary(model))
}

We will try 5 formulas. The first one is the actual data-generating process and should be the best in terms of RMSE. We exclude it from model selection, since the aim is to simulate a scenario where we don't know the actual formula behind the data; we calculate it just for reference and will pick one of the other 4 models for inference.

formulas = list("y ~ x1 + x2 +x3 + x1*x2 + x2*x3", 
                "y ~ .", 
                "y ~ x1 + x2", 
                "y ~ x1 + x2 + x3 + x1*x2",
                "y ~ x1 + x2 + x3 + x2*x3")
results = data.frame()

models = list()
for (formula in formulas){

    # CV performance of this formula across the 30 splits
    results_selection <- map2_df(.x = validation_data$splits,
                                 .y = validation_data$id,
                                 ~my_lm(formula = formula, split = .x, id = .y))

    # the same formula fitted to the full selection set
    model = return_model(formula)

    results = rbind.data.frame(results, results_selection)
    models = c(models, model)

}

We retrieve the mean RMSE across the splits, calculated on the test part of each split.
We can see that the real model is the best in terms of RMSE. Among the others, the one including the x2:x3 interaction is the best, so we will keep that one as our "selected model".

results %>%
    group_by(id, formula) %>%
    rmse(truth, prediction) %>%
    group_by(formula) %>%
    summarise(mean_rmse = mean(.estimate)) %>%
    as.data.frame()
##                           formula mean_rmse
## 1                           y ~ .  219.4756
## 2                     y ~ x1 + x2  625.0173
## 3        y ~ x1 + x2 + x3 + x1*x2  217.3185
## 4        y ~ x1 + x2 + x3 + x2*x3  198.9802
## 5 y ~ x1 + x2 +x3 + x1*x2 + x2*x3  196.4747

We can check the parameters and the standard errors when the selected model is fitted to the whole selection dataset.

## 
## Call:
## lm(formula = formula, data = df_selection)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -854.07 -132.91   -0.97  137.65  714.53 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -245.1084   138.9684  -1.764   0.0779 .  
## x1            82.7919     1.3540  61.148   <2e-16 ***
## x2            15.7118     6.8628   2.289   0.0221 *  
## x3            -1.5177     4.5626  -0.333   0.7394    
## x2:x3          5.1676     0.2256  22.906   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 199.1 on 2495 degrees of freedom
## Multiple R-squared:  0.947,  Adjusted R-squared:  0.9469 
## F-statistic: 1.115e+04 on 4 and 2495 DF,  p-value: < 2.2e-16

And let's see what happens if we fit the same model to the inference set.

model_test = lm(formula=formulas[[5]], data=df_inference)

summary(model_test)

## 
## Call:
## lm(formula = formulas[[5]], data = df_inference)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -656.47 -138.67   -5.64  130.21  773.99 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -438.7059   140.2618  -3.128 0.001782 ** 
## x1            81.4724     1.3622  59.812  < 2e-16 ***
## x2            23.4856     6.9475   3.380 0.000735 ***
## x3             3.6309     4.5942   0.790 0.429417    
## x2:x3          4.9750     0.2275  21.869  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 199.4 on 2495 degrees of freedom
## Multiple R-squared:  0.9473, Adjusted R-squared:  0.9472 
## F-statistic: 1.121e+04 on 4 and 2495 DF,  p-value: < 2.2e-16
First, we can see that the parameters have changed a bit.
Second, in the inference set the standard errors are generally bigger relative to the parameter estimates, and will generate wider confidence intervals.
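The plot below compares the absolute ratio between the standard error and the estimate of each parameter in both sets. The post doesn't show how ratio_df is built; a minimal sketch given the structures above (note that models accumulates, for each formula, its coefficient vector followed by its summary, so models[[10]] is the summary of the fifth formula fitted to df_selection):

selection_coefs = coef(models[[10]])         # coefficient matrix from the selection fit
inference_coefs = coef(summary(model_test))  # coefficient matrix from the inference fit

ratio_df = bind_rows(
    tibble(parameter = rownames(selection_coefs),
           ratio = abs(selection_coefs[, "Std. Error"] / selection_coefs[, "Estimate"]),
           set = "selection"),
    tibble(parameter = rownames(inference_coefs),
           ratio = abs(inference_coefs[, "Std. Error"] / inference_coefs[, "Estimate"]),
           set = "inference"))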

ggplot(data = ratio_df) +
  geom_point(aes(x = parameter, y = ratio, col = set), size = 3) +
  theme_light() +
  theme(legend.title = element_blank()) +  # applied after theme_light() so it isn't overridden
  xlab("") +
  ylab("Ratio") +
  ggtitle("Absolute ratio between SD and Estimate")

[Figure: "Absolute ratio between SD and Estimate", by parameter and dataset]

My idea is to add a plot with the confidence intervals so the effect can be seen directly, but I don't have the time today. Anyway, it is clear that the standard-error-to-parameter ratio is bigger in the inference set, showing that inference on the same dataset used for model selection is invalid: it is overconfident in the results.

Remarks on R2

R2 depends on the variance of the predictors

Quoting from Shalizi[^1], assume a true linear model
$$Y = aX + \epsilon$$
and assume we know \(a\) exactly.
The variance of Y will be \(a^2\mathbb{V}[X] + \mathbb{V}[\epsilon]\).
So \(R^2 = \frac{a^2\mathbb{V}[X]}{a^2\mathbb{V}[X] + \mathbb{V}[\epsilon]}\)
This goes to 0 as \(\mathbb{V}[X] \rightarrow 0\) and it goes to 1 as \(\mathbb{V}[X] \rightarrow \infty\). "It thus has little to do with the quality of the fit, and a lot to do with how spread out the predictor variable is. Notice also how easy it is to get a high \(R^2\) even when the true model is not linear!"

Below is a quick comparison between two linear relationships, one with much higher variance in the predictor than the other.
A different constant is added to one of them for better display in the plot.

library(tidyverse)

x1 = rnorm(1000, mean=0, sd=1)    # low-variance predictor
x2 = rnorm(1000, mean=0, sd=10)   # high-variance predictor
error = rnorm(1000, mean=0, sd=0.5)

y1 = x1 + error
y2 = 10 + x2 + error  # constant added only for plotting

df = data.frame(x1, x2, y1, y2)

model1 = lm(y1 ~ x1, data = df)
model2 = lm(y2 ~ x2, data = df)
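Comparing the \(R^2\) of both fits (values not shown here; from the formula above we expect roughly \(1/(1 + 0.25) = 0.8\) for model1 and \(100/(100 + 0.25) \approx 0.998\) for model2, even though the noise term is identical in both):

summary(model1)$r.squared
summary(model2)$r.squared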

Linear Smoothers

Linear regression as smoothing

Let's assume the DGP (data generating process) is: $$ Y = \mu(x) + \epsilon$$ where \(\mu(x)\) is the mean Y value for that particular x and \(\epsilon\) is an error with mean 0.

When running OLS we are trying to approximate \(\mu(x)\) with a linear function of the form \(\alpha + \beta x\) and trying to retrieve the best \(\alpha\) and \(\beta\) minimizing the mean-squared error.

The conclusions don't change, but the math gets easier, if we assume both X and Y are centered (mean = 0).
With that in mind we can write down the MSE and optimize it to get the best parameters.

\[MSE(\alpha, \beta) = \mathbb{E}[(Y - \alpha - \beta X)^2] \\ = \mathbb{E}[\mathbb{E}[(Y - \alpha - \beta X)^2 | X]] \\ = \mathbb{E}[\mathbb{V}[Y - \alpha - \beta X | X]] + \mathbb{E}[(\mathbb{E}[Y- \alpha - \beta X | X])^2] \\ = \mathbb{E}[\mathbb{V}[Y|X]] + \mathbb{E}[(\mathbb{E}[Y- \alpha - \beta X | X])^2]\]

Taking derivatives with respect to \(\alpha\) and \(\beta\) for the optimization:
The first term can be dropped since it doesn't include any parameter.

$$\frac{\partial MSE}{\partial \alpha} = \mathbb{E}[2(Y - \alpha - \beta X)(-1)] = 0 \\ \mathbb{E}[Y - a - b X] = 0 \\ a = \mathbb{E}[Y] - b\mathbb{E}[X] = 0$$ since Y and X are centered.

and $$\frac{\partial MSE}{\partial \beta} = \mathbb{E}[2(Y - \alpha - \beta X)(-X)] = 0 \\ \mathbb{E}[XY] - b\mathbb{E}[X^2] = 0 \\ b = \frac{Cov[X,Y]}{\mathbb{V}[X]}$$

The optimal beta is a function of the covariance between Y and X, and the variance of X.

Putting together \(a\) and \(b\) we get \(\mu(x) = x \frac{Cov[X,Y]}{\mathbb{V}[X]}\)

Replacing with the values from the sampled data, we get estimates of \(a\) and \(b\).

Remember the variables are 0-centered, so the variance and covariance simplify.

\[ \hat a = 0 \\ \hat b = \frac{\sum_i y_i x_i}{\sum_i x_i^2}\]
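A quick numeric check of these estimates on centered simulated data (a hypothetical snippet, not from the book):

x = rnorm(500); x = x - mean(x)          # center the predictor
y = 3 * x + rnorm(500); y = y - mean(y)  # center the response

c(sum(y * x) / sum(x^2),   # b-hat from the formula above
  coef(lm(y ~ x))[["x"]])  # slope from lm(), identical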

With all this we can see how OLS is a smoothing of the data.
Writing in terms of the data points:
$$\hat \mu(x) = \hat b x \\ = x \frac{\sum_i y_i x_i}{\sum_i x_i^2} \\ = \sum_i y_i \frac{x_i}{\sum_j x_j^2} x \\ = \sum_i y_i \frac{x_i}{n \hat \sigma_x^2} x$$ where \(\hat \sigma_x^2\) is the sample variance of X.
In words, our prediction is a weighted average of the observed values \(y_i\) of the dependent variable, where the weights are proportional to how far \(x_i\) is from the center (relative to the variance), and proportional to the magnitude of \(x\). If \(x_i\) is on the same side of the center as \(x\), it gets a positive weight, and if it's on the opposite side it gets a negative weight. (Shalizi 2017)
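To see the weighted-average reading in action, a small sketch (hypothetical, assuming centered data as above):

x = rnorm(100); x = x - mean(x)
y = 2 * x + rnorm(100); y = y - mean(y)
x0 = 1.5                       # point where we want a prediction

w = x * x0 / sum(x^2)          # weight of each observed y_i
c(sum(w * y),                  # prediction as a weighted average of y
  coef(lm(y ~ x))[["x"]] * x0) # same prediction via the fitted slope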

If \(\mu(x)\) really is a straight line this is fine, but when it's not, the fact that the weights depend on how far each \(x_i\) is from the center, rather than on its distance to the point we are predicting, can lead to awful predictions.

Alternative smoothers

Other methods smooth the data in different ways to mitigate this.

As a quick example, we have KNN regression, where the smoothing is done using only the observations closest to the one we want to predict (which can get quite noisy, since it depends heavily on the sample points in a small area).

Kernel smoothers are a variant where different kernels give different smoothings. The main idea is to use a window of data, putting more weight on points close to the one we want to predict: a Gaussian weight around x, for example, or a uniform weight within a window. Note this is different from KNN regression: instead of an unweighted average of the k nearest points, every observation contributes according to its kernel weight (and in local-regression variants a small regression is fit in each neighbourhood).
A nice thing about these smoothers (and KNN regression) is that if we want to predict points far from the training data we won't get a linear extrapolation, as with OLS; the prediction is pulled towards the closest data points we had in training.
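A minimal sketch of a Gaussian kernel smoother in the Nadaraya-Watson form (kernel_smooth, the bandwidth h = 1, and the sine data are all made up for illustration):

kernel_smooth = function(x0, x, y, h = 1) {
    w = dnorm((x - x0) / h)  # Gaussian weights centered on x0
    sum(w * y) / sum(w)      # kernel-weighted average of the responses
}

x = runif(200, 0, 10)
y = sin(x) + rnorm(200, sd = 0.2)

# at x0 = 15, far from the data, the prediction is pulled toward the
# observations near the right edge instead of extrapolating linearly
sapply(c(2, 5, 15), kernel_smooth, x = x, y = y)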

Bias Variance Tradeoff

Mean squared error (MSE) is a measure of how far our prediction is from the true values of the dependent variable. It's the expectation of the squared error.

The squared error being:

\[(Y - \hat \mu(x))^2\]

where Y is the true value and \(\hat \mu(x)\) is the prediction for a given x.

We can decompose it into:

\[(Y - \hat \mu(x))^2 \\ = (Y - \mu(x) + \mu(x) - \hat \mu(x))^2 \\ = (Y - \mu(x))^2 + 2(Y - \mu(x))(\mu(x) - \hat \mu(x)) + (\mu(x) - \hat \mu(x))^2\]

So, that's the squared error. The MSE is the expectation of that.

The expectation is a linear operator, so we can apply it separately to each term of the sum.
The expectation of the first term is the variance of the error intrinsic to the DGP.
The second term goes to 0 because it involves \(\mathbb{E}[Y-\mu(x)]\), the expectation of the error, which is 0.
The third term remains as it is, since it doesn't involve random variables (for now we treat \(\hat \mu(x)\) as fixed).

\[MSE(\hat \mu(x)) = \sigma^2_x + (\mu(x) - \hat \mu(x))^2\]

This is our first bias-variance decomposition. The first term is the intrinsic difficulty of the problem: the variance of the error, which cannot be reduced; it is what it is.
The second term is how far our prediction is from the true expected value for that particular x.

This would be fine if we didn't need to consider \(\hat \mu(x)\) a random variable itself, which it is, since it depends on the specific dataset we are using. Given another dataset our estimate would be different, despite using the same modelling method.
What we actually want is the MSE of the method \(\hat M\), not only of one particular realization.

\[MSE(\hat M_n(x)) = \mathbb{E}[(Y - \hat M_n(X))^2 | X=x] \\ = ... \\ = \sigma^2_x + (\mu(x) - \mathbb{E}[\hat M_n(x)])^2 + \mathbb{V}[\hat M_n(x)] \]

This is our second bias-variance decomposition.
The first term is still the irreducible error.
The second term is the bias of using \(\hat M_n\) to approximate \(\mu(x)\): the approximation bias.
The third term is the variance of the estimate of the regression function. If our estimates have high variance, we can have large errors despite using an unbiased approximation.

Flexible methods can approximate \(\mu(x)\) closely, but more flexibility usually increases the variance of the estimate. That's the bias-variance tradeoff. We need to evaluate how to balance it; sometimes accepting some bias reduces the total error much more through the decrease in variance.
Usually a larger N decreases the MSE, since it decreases both the bias and the variance terms.
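A small simulation sketch of the tradeoff (everything here is made up for illustration: true \(\mu(x) = \sin(x)\), a biased but stable linear fit versus a flexible degree-10 polynomial, both evaluated at a single point across repeated datasets):

set.seed(42)
x0 = 2  # point where we evaluate both estimators

preds = replicate(500, {
    x = runif(50, 0, 5)
    y = sin(x) + rnorm(50, sd = 0.3)
    c(linear   = predict(lm(y ~ x),           newdata = data.frame(x = x0)),
      flexible = predict(lm(y ~ poly(x, 10)), newdata = data.frame(x = x0)))
})

(rowMeans(preds) - sin(x0))^2  # squared approximation bias of each method
apply(preds, 1, var)           # variance of each method across datasets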

Reference

[^1]: Based on section 1.4.1 of Shalizi, Advanced Data Analysis from an Elementary Point of View.