
Expectation Maximization for Poisson process

The problem

What if you want to run a regression to estimate the coefficients and recover the data generating process behind some data, but you are in a scenario with censored data? How does that affect the estimation? Can we do better?

Censored data

In short, you have censored data when some observations are only partially observed or are constrained, for some specific reason that is natural and unavoidable.

Unobserved

For instance, in survival analysis, if the subject is not dead yet (luckily), you cannot observe the time of death and hence you don't know whether they will die tomorrow or in two years: the death date is not yet observable.

Constrained

For goods with limited stock, once you deplete the inventory you can't sell more by definition, so you can't observe all the demand you would have had if more items were available. Likewise, an airline selling seats can't observe the full demand once all the seats have been sold; maybe more people were willing to fly.

We will simulate an example of the latter case, an airline, with simple toy data.

import numpy as np
import statsmodels.api as sm
from scipy.stats import poisson
import matplotlib.pyplot as plt

We create some fake data where seats sold are generated from a Poisson distribution, with an intercept (2) and a coefficient on how many days are left until departure (-0.2): the further from departure, the fewer seats are sold.

We also constrain the demand on the last date. We don't observe the full number of seats that would have been sold under the Poisson distribution, but that amount minus 3. Only we know that, because we are the ones tweaking the data. In real life we would just see data similar to the generated one, and we couldn't see the actual sales that a complete Poisson generating process would have produced (which is how the tickets are actually sold in our example).

# Step 1: Generate data
np.random.seed(100)
n_samples = 20
n_features = 1
substract_sales = 3

# Generate features
X = np.random.normal(size=(n_samples, n_features))
X = np.array(range(n_samples+1, 1, -1))  # days to departure: from n_samples+1 down to 2
X = sm.add_constant(X)  # Add intercept term

# True coefficients
beta_true = np.array([2, -0.2])

# Generate Poisson-distributed target values
linear_pred = np.dot(X, beta_true)
lambda_true = np.exp(linear_pred)
y = np.random.poisson(lambda_true)
# y

# Right-censoring threshold: only the last date (when the fare is closed) is censored;
# earlier dates get a threshold above their observed value, so they remain uncensored
censor_threshold = np.concatenate([(y+1)[:-1],np.array([y[-1]-substract_sales])])
y_censored = np.where(y > censor_threshold, censor_threshold, y)
is_censored = (y > censor_threshold)

Maybe it's easier to visualize.
The green curve is the true lambda parameter for each date, based on the intercept and the true coefficient multiplied by days to departure.
The blue dots are the data points we actually observe for all dates except the last one.
The grey dot is the demand we would see on the last date if there were no limit on how many seats can be sold, following the Poisson distribution.
The red dot is the data point we actually have: on the last day the airline sold all the remaining seats and couldn't fulfill the true demand, hence we observe lower sales than the ones generated by the Poisson process.

# Plot the data
plt.xlabel('Days to departure')
plt.ylabel('Bookings')
plt.title('Data Plot')
plt.plot(X[:-1, 1], y[:-1], 'o')
plt.plot(X[:, 1], lambda_true, label='Lambda True', color="green")
# plt.plot(X[:, 1], lambda_est, label='Lambda Estimated')
plt.scatter(X[-1, 1], y[-1:],  color="grey", label="Real unseen demand" )
plt.scatter(X[-1, 1], y[-1:]-substract_sales, color='red', label='Constrained')
plt.legend()
plt.show()

[Figure: simulated bookings by days to departure, with the true lambda curve, the unseen demand (grey) and the constrained sales (red) on the last day]

Estimation
Poisson Regression

First we estimate the parameters using a Poisson regression. We assume the data follows a Poisson process and we do nothing to handle the constrained data.
The results are kind of OK, we get somewhat close to the true parameters, but the estimates are clearly biased.

model_censored = sm.GLM(y_censored, X, family=sm.families.Poisson())
results_censored = model_censored.fit()
print(f'Estimated Censored coefficients: {results_censored.params}')
## Estimated Censored coefficients: [ 1.86976922 -0.21124541]
Expectation Maximization

OK, here is our alternative. What we can do, a bit more sophisticated, is to apply the EM algorithm, an iterative procedure, to estimate the coefficients of the Poisson regression while incorporating the knowledge we have: that there might be constrained data.
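
For reference, in the E-step we need the expected demand of a censored observation, i.e. the conditional expectation of a Poisson variable given that it exceeded the censoring threshold \(c\):

\[\mathbb{E}[Y \mid Y > c] = \frac{\lambda \left(1 - F(c-1; \lambda)\right)}{1 - F(c; \lambda)}\]

where \(F(\cdot;\lambda)\) is the Poisson CDF. The code below uses a simpler plug-in ratio in place of this exact expression, which is enough to illustrate the mechanics of the algorithm.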

# Step 2: Initialize parameters
beta_est = np.zeros(n_features + 1)
tol = 1e-4
max_iter = 1000

for iteration in range(max_iter):
    # Step 3: E-step
    # Estimate the expected values of censored data
    lambda_est = np.exp(np.dot(X, beta_est))
    expected_y = np.where(
        is_censored,
        (censor_threshold + 1) / (1 - poisson.cdf(censor_threshold, lambda_est)),
        y_censored
    )
    # Ensure expected_y values are valid
    expected_y = np.nan_to_num(expected_y, nan=np.mean(y_censored), posinf=np.max(y_censored), neginf=0)
    # Step 4: M-step
    # Update parameter estimates using Poisson regression
    model = sm.GLM(expected_y, X, family=sm.families.Poisson())
    results = model.fit(method="lbfgs")
    # results = model.fit_regularized(L1_wt=0, alpha=0.1) 
    new_beta_est = results.params

    # Check convergence
    if np.linalg.norm(new_beta_est - beta_est) < tol:
        break

    beta_est = new_beta_est
## <string>:7: RuntimeWarning: divide by zero encountered in divide
print(f'Estimated coefficients: {beta_est}')

## Estimated coefficients: [ 2.11015996 -0.23573144]

We can see our estimates are of course not perfect, but the intercept is closer to the true parameter.
Let's visualize the results to get a clearer intuition.

uncensored_predictions = results.predict(X)
censored_predictions = results_censored.predict(X)

Promising!
For the furthest dates we see a bias in both approaches: we are not doing better than the Poisson regression, but not worse either.
As we move closer to the departure, the EM algorithm predictions get closer and closer to the true lambdas, while the regular Poisson regression continues its biased trajectory.
The EM procedure adjusted the coefficients to better match the constrained data.

plt.plot(X[:, 1], lambda_true, label='Lambda True', color="green")
plt.plot(X[:, 1], uncensored_predictions, label='Uncensored Predictions (EM)', color="blue")
plt.plot(X[:, 1], censored_predictions, label='Censored Predictions', color="red")
plt.xlabel('Days to departure')
plt.ylabel('Lambda - Poisson regression')
plt.title('Lambda Predictions')
plt.legend()
plt.show()

[Figure: true lambda vs. EM-based and censored Poisson regression predictions across days to departure]

Weighted regression

Weighted regression consists of assigning a different weight to each observation, giving it more or less importance when fitting the regression.

One way to look at it is as solving the regression problem by minimizing the Weighted Mean Squared Error (WMSE) instead of the Mean Squared Error (MSE):

\[WMSE(\beta, w) = \frac{1}{N} \sum_{i=1}^n w_i (y_i - \overrightarrow{x_i} \beta)^2\]

Intuitively, we are looking for the coefficients that minimize the MSE but putting different weights on each observation. OLS is the particular case where all \(w_i = 1\).
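
For reference, the WMSE has a closed-form minimizer analogous to the OLS estimator; with a diagonal weight matrix \(W = \mathrm{diag}(w_1, \dots, w_n)\) (the \(1/N\) factor doesn't affect the minimizer):

\[\hat\beta_{WLS} = (X^\top W X)^{-1} X^\top W y\]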

Why do this? A few reasons (Shalizi 2015, Chapter 7.1):

  • Focusing accuracy: We want to predict especially well some particular points or regions of points, maybe because that's the focus in production or because being wrong at those observations has a huge cost, etc. Weighted regression will put extra effort into matching that data.

  • Discount imprecision: OLS returns the maximum likelihood estimates when the residuals are independent and normal, with mean 0 and constant variance. When we face non-constant variance, OLS no longer returns the MLE. The logic behind weighted regression is that it makes no sense to pay equal attention to all the observations, since some of them have higher variance and are less indicative of the conditional mean. We should put more emphasis on the regions of lower variance, predict them well, and "expect to fit poorly where the noise is big".
    The weights that recover the MLE are \(w_i = \frac{1}{\sigma_i^2}\).

  • Sampling bias: If we think or know that the observations in our data are not completely random and some subset of the population might be under-represented (in a survey, for example, or because of data availability), it might make sense to weight observations inversely to their probability of being included in the sample. Under-represented observations get more weight and over-represented ones less weight.
    Another similar situation is related to covariate shift. If the distribution of the variable x changes over time, we can use a weight defined as the ratio of the probability density functions (see the short sketch after this list).

    "If the old pdf was p(x) and the new one is q(x), the weight we'd want to use is \(w_i = q(x_i)/p(x_i)\)"

  • Other: Related to GLMs, when the conditional mean is a non-linear function of a linear predictor. (Not further explained in the book at this point.)
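
A minimal sketch of the covariate-shift weighting idea from the list above, assuming (purely for illustration) that both densities are known and normal:

# Hypothetical illustration: the covariates were historically sampled from p(x) = N(0, 1),
# but the new population follows q(x) = N(1, 1)
set.seed(42)
x = rnorm(500, mean = 0, sd = 1)
y = 2 + 3 * x + rnorm(500, sd = 1)
df_shift = data.frame(x, y)

# Density-ratio weights w_i = q(x_i) / p(x_i)
df_shift$w = dnorm(df_shift$x, mean = 1, sd = 1) / dnorm(df_shift$x, mean = 0, sd = 1)

# The weighted fit puts more emphasis on the region the new population covers
wols_shift = lm(y ~ x, data = df_shift, weights = w)
summary(wols_shift)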

Is there a scenario where OLS is better than weighted regression, assuming we can compute the weights?

Example.

First we will see the impact of using weighted regression, using a simulated scenario where we actually know the variance of the error of each observation. This is not realistic but useful to see it in action.

library(tidyverse)

We generate 1000 datapoints with a linear relation between y and x. Intercept = 0, slope = 5. We let the variance of the error depend on the value of x. Higher values of x are associated with higher values of the variance of the error.

set.seed(23)
n=1000
x = runif(n,0,10)
error = rnorm(n,0, x/1.5)
df = data.frame(x)
df = df %>% mutate(y = 5*x + error)
Visually..

ggplot(data=df, aes(x=x, y=y)) +
  geom_point(alpha=0.3) + 
  # geom_smooth(color="blue") +
  # geom_smooth(method = "lm", mapping = aes(weight = (1/sqrt(x)^2)),
  #             color = "red", show.legend = FALSE) +
  NULL

[Figure: scatter plot of y vs. x showing the error variance growing with x]

Linear regression
ols = lm(formula = "y~x", data=df)
summary(ols)

## 
## Call:
## lm(formula = "y~x", data = df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -14.868  -1.720  -0.137   1.918  14.722 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.19192    0.24278   0.791    0.429    
## x            4.95585    0.04148 119.489   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.855 on 998 degrees of freedom
## Multiple R-squared:  0.9347, Adjusted R-squared:  0.9346 
## F-statistic: 1.428e+04 on 1 and 998 DF,  p-value: < 2.2e-16

We get an intercept of 0.19, non-significant, when the actual value is 0, and a slope of 4.96 when the actual value is 5.

Weighted linear regression
wols = lm(formula = "y~x", data=df, weights = (1/sqrt(x)^2) )
summary(wols)
## 
## Call:
## lm(formula = "y~x", data = df, weights = (1/sqrt(x)^2))
## 
## Weighted Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.8880 -0.8601 -0.0016  0.8936  4.6535 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 0.001483   0.030072   0.049    0.961    
## x           4.993473   0.021874 228.286   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.498 on 998 degrees of freedom
## Multiple R-squared:  0.9812, Adjusted R-squared:  0.9812 
## F-statistic: 5.211e+04 on 1 and 998 DF,  p-value: < 2.2e-16

We get an intercept of 0.001, still non-significant but much closer to 0 and with a lower standard error, and a slope of 4.99, also much closer to the actual value of 5 and with a lower standard error.

Conclusion: if we know the right weights we can get better estimates from a linear regression in case of heteroscedasticity.

Inference is not valid in the dataset used for model selection.

Let's say we have a dataset and we want to fit a model to it and do some inference such as obtaining the coefficients and look for their confidence intervals.

For such a task we would first need to find a model that we think approximates the real data generating process behind the phenomenon.
This will be the model selection step.
Then we would look at the output of our model and get the standard error of the coefficients or calculate the confidence interval or any other similar task. This will be the inference step.

The issue here is that, if we don't know the true model and we do model selection, our selected model will itself be a random object. Why? Because the particular dataset we are using is also a set of random variables. Other datasets might return a different model formula as the best among our options, since those datasets would have other observations and particularities.

Main problem:

Since we are selecting a model based on a particular dataset, the standard errors and p-values will be smaller than the actual ones.

"That means there is some extra randomness in your estimated parameters (and everything else), which isn't accounted for by formulas which assume a fixed model.
This is not just a problem with formal model-selection devices like cross-validation. If you do an initial, exploratory data analysis before deciding which model to use - and that's generally a good idea - you are, yourself, acting as a noisy, complicated model-selection device" (Shalizi 2017)

The most straightforward way to deal with this (if you are using independent observations) is to split the data, do model selection in one part and then fit the best model in the other part. Your second fit will be the one useful for inference.
You could fit the model to the full data but that would include the part used for model selection and you would still get false, overconfident standard errors.

Let's see an example.
We will generate data following a somewhat "complicated" model with interactions. We will split the data into two equal-size parts: one for model selection and one for inference.
We will then fit a few formulas to the model selection part and pick the one with the minimum RMSE. We will compare the standard errors obtained in the model selection part with the ones obtained by fitting that model to the inference part.

Thanks to BrunoRodrigues for this post, which I used as a guideline to fit models with cross-validation in R.

We start by generating the data, including interactions.

set.seed(1)
N = 5000
b0 = 4
b1 = 1
b2 = 2
b3 = 3
b4 = 4
b5 = 5

x1 = runif(N, 0, 10)
x2 = rnorm(N, 20, 3)
x3 = runif(N, 20, 40)
error = rnorm(N, 0, 200)

y = b0 + b1*x1 + b2*x2 + b3*x3 + b4*x1*x2 + b5*x2*x3 + error


df = tibble(y, x1, x2, x3)

We do the first split, df_selection will be the one used to try different models and pick one.
df_inference will be used to do the actual inference given the model selected.

# The resampling helpers (initial_split, mc_cv, analysis, assessment) come from rsample,
# and rmse() from yardstick, both part of the tidymodels suite
library(rsample)
library(yardstick)

prop = 0.5

selection_inference_split = initial_split(df, prop=prop)

df_selection = training(selection_inference_split)
df_inference = testing(selection_inference_split)

To select a model using df_selection we will use cross-validation to try to find the model that generalizes best.
We will generate 30 splits, each using 70% of the data for fitting, and use the remaining 30% to calculate the RMSE metric.

validation_data <- mc_cv(df_selection, prop = 0.7, times = 30)

We create two functions. my_lm() runs a linear regression on the training part of each CV split and returns the predictions for the testing part of that split; we will run this for a handful of formulas.
return_model() fits the model to the whole selection data to extract the parameters and standard errors we would get if we used the same dataset that was used for model selection.

my_lm <- function(formula, split, id){

    analysis_set = analysis(split)
    model <- lm(formula = formula, data = analysis_set)

    assessment_set <- assessment(split)

    # Return the out-of-sample predictions for this split
    tibble::tibble("id" = id,
        "formula" = formula,
        "truth" = assessment_set$y,
        "prediction" = unlist(predict(model, newdata = assessment_set)))
}


return_model = function(formula){

    # Fit to the whole selection set to inspect coefficients and standard errors
    model <- lm(formula = formula, data = df_selection)
    list(model$coefficients, summary(model))
}

We will try 5 formulas. The first one is the actual data generating process and should be the best in terms of RMSE. We exclude it from model selection, since the aim is to simulate a scenario where we don't know the actual formula behind the data; we calculate it just for reference and will pick one of the other 4 models for inference.

formulas = list("y ~ x1 + x2 +x3 + x1*x2 + x2*x3", 
                "y ~ .", 
                "y ~ x1 + x2", 
                "y ~ x1 + x2 + x3 + x1*x2",
                "y ~ x1 + x2 + x3 + x2*x3")
results = data.frame()

models = list()
for (formula in formulas){

    results_selection <- map2_df(.x = validation_data$splits,
                                 .y = validation_data$id,
                                 ~my_lm(formula = formula, split = .x, id = .y))

    model = return_model(formula)

    results = rbind.data.frame(results, results_selection)
    models = c(models, model)
}

We retrieve the mean RMSE across the splits, calculated on the test part of each split.
We can see that the real model is the best in terms of RMSE. Among the others, the one including the x2:x3 interaction is the best, so we will keep that one as our "selected model".

results %>%
    group_by(id, formula) %>%
    rmse(truth, prediction) %>%
    group_by(formula) %>%
    summarise(mean_rmse = mean(.estimate)) %>%
    as.data.frame()
##                           formula mean_rmse
## 1                           y ~ .  219.4756
## 2                     y ~ x1 + x2  625.0173
## 3        y ~ x1 + x2 + x3 + x1*x2  217.3185
## 4        y ~ x1 + x2 + x3 + x2*x3  198.9802
## 5 y ~ x1 + x2 +x3 + x1*x2 + x2*x3  196.4747

We can check the parameters and the standard errors of the selected model when fitted to the whole selection dataset (these are stored in the models list built above).

## 
## Call:
## lm(formula = formula, data = df_selection)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -854.07 -132.91   -0.97  137.65  714.53 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -245.1084   138.9684  -1.764   0.0779 .  
## x1            82.7919     1.3540  61.148   <2e-16 ***
## x2            15.7118     6.8628   2.289   0.0221 *  
## x3            -1.5177     4.5626  -0.333   0.7394    
## x2:x3          5.1676     0.2256  22.906   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 199.1 on 2495 degrees of freedom
## Multiple R-squared:  0.947,  Adjusted R-squared:  0.9469 
## F-statistic: 1.115e+04 on 4 and 2495 DF,  p-value: < 2.2e-16

And let's see what happens if we fit the same model to the inference set.

model_test = lm(formula=formulas[[5]], data=df_inference)

summary(model_test)

## 
## Call:
## lm(formula = formulas[[5]], data = df_inference)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -656.47 -138.67   -5.64  130.21  773.99 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -438.7059   140.2618  -3.128 0.001782 ** 
## x1            81.4724     1.3622  59.812  < 2e-16 ***
## x2            23.4856     6.9475   3.380 0.000735 ***
## x3             3.6309     4.5942   0.790 0.429417    
## x2:x3          4.9750     0.2275  21.869  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 199.4 on 2495 degrees of freedom
## Multiple R-squared:  0.9473, Adjusted R-squared:  0.9472 
## F-statistic: 1.121e+04 on 4 and 2495 DF,  p-value: < 2.2e-16

First, we can see that the parameters have changed a bit.
Second, the standard errors are generally bigger relative to the parameter estimates in the inference set, and they will generate wider confidence intervals.
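
The ratio_df object used in the plot below is not built in the code shown above; a minimal sketch of how it could be constructed, assuming it holds the absolute standard-error-to-estimate ratio of each coefficient for both fits (model_selection refits the selected formula on df_selection, as in the summary shown earlier):

# Hypothetical reconstruction of ratio_df (not shown in the original code)
model_selection = lm(formula = formulas[[5]], data = df_selection)
coefs_selection = summary(model_selection)$coefficients
coefs_inference = summary(model_test)$coefficients

ratio_df = rbind(
  data.frame(parameter = rownames(coefs_selection),
             ratio = abs(coefs_selection[, "Std. Error"] / coefs_selection[, "Estimate"]),
             set = "selection"),
  data.frame(parameter = rownames(coefs_inference),
             ratio = abs(coefs_inference[, "Std. Error"] / coefs_inference[, "Estimate"]),
             set = "inference")
)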

ggplot(data = ratio_df) +
  geom_point(aes(x=parameter, y=ratio, col=set), size=3) +
  theme_light() +
  theme(legend.title = element_blank()) +  # applied after theme_light() so it isn't overridden
  xlab("") +
  ylab("Ratio") +
  ggtitle("Absolute ratio between SD and Estimate")

[Figure: absolute SE-to-estimate ratio per coefficient, selection set vs. inference set]

My idea is to add a plot with the confidence intervals so the effect can be seen directly, but I don't have the time today. Anyway, it is clear that the standard error to parameter ratio is bigger in the inference set, showing that inference on the same dataset used for model selection is invalid, as it is overconfident in the results.

Remarks on R2

R2 depends on the variance of the predictors

Quoting from Shalizi[^1], assume a true linear model
$$ Y = aX + \epsilon $$
and assume we know \(a\) exactly.
The variance of \(Y\) will be \(a^2\mathbb{V}[X] + \mathbb{V}[\epsilon]\),
so \(R^2 = \frac{a^2\mathbb{V}[X]}{a^2\mathbb{V}[X] + \mathbb{V}[\epsilon]}\).
This goes to 0 as \(\mathbb{V}[X] \rightarrow 0\) and it goes to 1 as \(\mathbb{V}[X] \rightarrow \infty\). "It thus has little to do with the quality of the fit, and a lot to do with how spread out the predictor variable is. Notice also how easy it is to get a high \(R^2\) even when the true model is not linear!"

Below is a quick comparison between two linear relationships, one with much higher variance in the predictor than the other.
A different constant is added to one of them for better display in the plot.

library(tidyverse)

x1 = rnorm(1000, mean=0, sd=1)
x2 = rnorm(1000, mean=0, sd=10)
error = rnorm(1000, mean=0, sd=0.5)

y1 = x1 + error
y2 = 10 + x2 +  error

df = data.frame(x1,x2,y1,y2)

model1 = lm(y1 ~ x1, data = df)
model2 = lm(y2 ~ x2, data = df)

# Compare the R^2 of the two fits: the x2 model shows a much higher R^2
# simply because its predictor is more spread out
summary(model1)$r.squared
summary(model2)$r.squared

Linear Smoothers

Linear regression as smoothing

Let's assume the DGP (data generating process) is: $$ Y = \mu(x) + \epsilon$$ where \(\mu(x)\) is the mean Y value for that particular x and \(\epsilon\) is an error with mean 0.

When running OLS we are trying to approximate \(\mu(x)\) with a linear function of the form \(\alpha + \beta x\) and trying to retrieve the best \(\alpha\) and \(\beta\) minimizing the mean-squared error.

The conclusions don't change but the math gets easier if we assume both X and Y are centered (mean=0).
With that in mind we can write down the MSE and optimize to get the best parameters.

\[
MSE(\alpha, \beta) = \mathbb{E}[(Y - \alpha - \beta X)^2] \\
= \mathbb{E}[\mathbb{E}[(Y - \alpha - \beta X)^2 \mid X]] \\
= \mathbb{E}[\mathbb{V}[Y \mid X]] + \mathbb{E}[(\mathbb{E}[Y - \alpha - \beta X \mid X])^2]
\]

Taking derivatives with respect to \(\alpha\) and \(\beta\) for the optimization:
The first term can be dropped since it doesn't include any parameter.

\[
\frac{\partial MSE}{\partial \alpha} = \mathbb{E}[2(Y - \alpha - \beta X)(-1)] \\
\mathbb{E}[Y - a - b X] = 0 \\
a = \mathbb{E}[Y] - b\,\mathbb{E}[X] = 0
\]
when \(Y\) and \(X\) are centered.

and
\[
\frac{\partial MSE}{\partial \beta} = \mathbb{E}[2(Y - \alpha - \beta X)(-X)] \\
\mathbb{E}[XY] - b\,\mathbb{E}[X^2] = 0 \\
b = \frac{Cov[X,Y]}{\mathbb{V}[X]}
\]

The optimal beta is a function of the covariance between Y and X, and the variance of X.

Putting together \(a\) and \(b\) we get \(\mu(x) = x \frac{Cov[X,Y]}{\mathbb{V}[X]}\)

Replacing with the values from the sampled data we get an estimation of \(a\) and \(b\).

Remember they are 0 centered so variance and covariance get simplified.

\[ \hat a = 0 \\ \hat b = \frac{\sum_i y_i x_i}{\sum_i x_i^2}\]

With all this we can see how OLS is a smoothing of the data.
Writing in terms of the data points:
\[
\hat\mu(x) = \hat b x = x \frac{\sum_i y_i x_i}{\sum_i x_i^2} = \sum_i y_i \frac{x_i}{\sum_j x_j^2} x = \sum_i y_i \frac{x_i}{n \hat\sigma_x^2} x
\]
where \(\hat\sigma_x^2\) is the sample variance of \(X\).
In words, our prediction is a weighted average of the observed values \(y_i\) of the dependent variable, where the weights are proportional to how far \(x_i\) is from the center (relative to the variance), and proportional to the magnitude of \(x\). If \(x_i\) is on the same side of the center as \(x\), it gets a positive weight, and if it's on the opposite side it gets a negative weight. (Shalizi 2017)

If \(\mu(x)\) really is a straight line this is fine, but when it's not, the fact that the weights depend on how far each \(x_i\) is from the center, rather than from the point we want to predict, can lead to awful predictions.
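
A quick numeric check of this weighted-average view, on an assumed toy dataset (not from the original derivation):

# OLS on centered data: the prediction at x_new equals a weighted sum of the y_i
set.seed(1)
x = rnorm(200)
y = 3 * x + rnorm(200)
x = x - mean(x); y = y - mean(y)   # center both variables

b_hat = sum(x * y) / sum(x^2)      # the b-hat from the derivation above
x_new = 1.5

w = x * x_new / sum(x^2)           # weights proportional to x_i and to x_new
c(direct = b_hat * x_new, weighted_sum = sum(w * y))  # the two numbers coincide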

Alternative smoothers

For that reason, other methods smooth the data in other ways to help mitigate this.

As a quick example, we have KNN regression, where the smoothing is done using only observations close to the one we want to predict (which can get quite noisy, since it depends a lot on the sample points in a small area).

Kernel smoothers are a variant where, depending on the kernel selected, we get a different smoothing. The main idea is to use a window of data, putting more weight on points close to the one we want to predict. This could be a Gaussian weight around \(x\), for example, or a uniform weight over a window. Note this differs from KNN regression in that we don't simply take the average of those points; we fit a local regression for that area.
A nice thing about these smoothers (and KNN regression) is that if we want to predict points far from the training data, we won't get a linear extrapolation as with OLS; the prediction will instead be pulled towards the closest data points seen in training.
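
As a rough sketch of the kernel-smoothing idea on an assumed toy dataset (this uses the simple weighted-average, Nadaraya-Watson form rather than a local regression):

# Gaussian-kernel smoother: weighted average of y, with weights centered at the prediction point
kernel_smoother = function(x_train, y_train, x_new, h = 0.5) {
  sapply(x_new, function(x0) {
    w = dnorm((x_train - x0) / h)
    sum(w * y_train) / sum(w)
  })
}

set.seed(2)
x = runif(200, 0, 10)
y = sin(x) + rnorm(200, sd = 0.3)

grid = seq(0, 12, by = 0.1)   # values above 10 are outside the training range
y_ols = predict(lm(y ~ x), newdata = data.frame(x = grid))
y_ks  = kernel_smoother(x, y, grid)
# Beyond x = 10 the OLS line keeps extrapolating linearly,
# while the kernel smoother is pulled towards the y values of the closest training points.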