LinkedIn Series
All posts in one place. Read in order or jump to what you need.
Overview & How to Use
This page collects my LinkedIn series in a playlist-style format. Each series has a short description and a numbered list of posts with direct links to LinkedIn. Start at Week 1 to follow the full narrative, or skim the summaries and jump into a topic that matches your needs.
State Space Models
Playlist
Week 1 · What is a State Space Model?
August 13, 2025
Introduces state space models as a way to separate signal from noise, combining interpretable components (trend, seasonalities, cycles, regressors) with Kalman filtering for estimation.
Week 2 · Why use State Space Models instead of ARIMA?
August 20, 2025
Compares SSMs with ARIMA, highlighting SSM strengths for trends, breaks, and missing values and notes that ARIMA itself can be written in state space form.
Week 3 · The Building Blocks: Trend, Seasonality, Cycle, Irregulars
August 27, 2025
Shows how SSMs are built like Lego: combine trend, seasonalities, cycles, and regressors to decompose and explain complex time series.
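The Lego idea can be sketched by simulating a series as the sum of its components. This is an illustrative numpy-only sketch (the component forms and parameter values are my own choices, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # ten "years" of monthly data

# Trend: a slowly drifting level (random walk with drift, as a sketch)
trend = 10 + 0.05 * np.arange(n) + np.cumsum(rng.normal(0, 0.02, n))

# Seasonality: a fixed monthly pattern repeating every 12 observations
pattern = np.sin(2 * np.pi * np.arange(12) / 12)
seasonal = np.tile(pattern, n // 12)

# Cycle: a damped sine wave with a period longer than a year
cycle = 0.5 * np.exp(-0.005 * np.arange(n)) * np.sin(2 * np.pi * np.arange(n) / 40)

# Irregular: white noise
irregular = rng.normal(0, 0.3, n)

# The observed series is just the sum of the blocks
y = trend + seasonal + cycle + irregular
```

Decomposition runs this in reverse: given only `y`, the state space machinery estimates each component separately.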
Week 4 · The Art of Optimal Updating: The Kalman Filter
September 3, 2025
Explains the Kalman filter’s prediction–update logic and why it yields optimal state estimates given uncertainty in data and model.
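The prediction–update logic fits in a few lines for the simplest case. A minimal sketch for the local level model (observation noise variance `r`, level noise variance `q`; parameter values are illustrative, not from the post):

```python
import numpy as np

def kalman_filter(y, q, r, a0=0.0, p0=1e6):
    """One-dimensional Kalman filter for the local level model:
    y_t = mu_t + eps_t (var r),  mu_t = mu_{t-1} + eta_t (var q)."""
    a, p = a0, p0
    filtered, variances = [], []
    for yt in y:
        # Predict: the level is a random walk, so the mean carries over
        # while the state variance grows by q
        a_pred, p_pred = a, p + q
        # Update: weigh prediction against observation via the Kalman gain
        f = p_pred + r            # variance of the one-step prediction error
        k = p_pred / f            # gain: how much to trust the new data point
        a = a_pred + k * (yt - a_pred)
        p = (1 - k) * p_pred
        filtered.append(a)
        variances.append(p)
    return np.array(filtered), np.array(variances)

rng = np.random.default_rng(1)
y = 5.0 + rng.normal(0, 1.0, 200)   # noisy observations of a constant level
a_filt, p_filt = kalman_filter(y, q=0.0, r=1.0)
```

With `q=0` the recursion reduces to a running mean, and the state variance shrinks as evidence accumulates.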
Week 5 · How Hindsight Improves Estimation: The Kalman Smoother
September 10, 2025
Shows how the smoother uses the full dataset to refine past state estimates, revealing clearer trends and breaks.
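The backward pass can be sketched with a fixed-interval (RTS) smoother on top of the filter, again for the local level model (a numpy-only sketch with illustrative parameters):

```python
import numpy as np

def filter_and_smooth(y, q, r, a0=0.0, p0=1e6):
    """Kalman filter followed by a fixed-interval smoother
    for the local level model."""
    n = len(y)
    a_pred = np.empty(n); p_pred = np.empty(n)
    a_filt = np.empty(n); p_filt = np.empty(n)
    a, p = a0, p0
    for t in range(n):
        a_pred[t], p_pred[t] = a, p + q          # predict
        f = p_pred[t] + r
        k = p_pred[t] / f
        a = a_pred[t] + k * (y[t] - a_pred[t])   # update
        p = (1 - k) * p_pred[t]
        a_filt[t], p_filt[t] = a, p
    # Backward pass: revise each estimate using everything seen afterwards
    a_sm = a_filt.copy(); p_sm = p_filt.copy()
    for t in range(n - 2, -1, -1):
        j = p_filt[t] / p_pred[t + 1]            # smoothing gain
        a_sm[t] = a_filt[t] + j * (a_sm[t + 1] - a_pred[t + 1])
        p_sm[t] = p_filt[t] + j**2 * (p_sm[t + 1] - p_pred[t + 1])
    return a_filt, p_filt, a_sm, p_sm

rng = np.random.default_rng(2)
level = np.cumsum(rng.normal(0, 0.1, 300))       # slowly moving true level
y = level + rng.normal(0, 1.0, 300)
a_filt, p_filt, a_sm, p_sm = filter_and_smooth(y, q=0.01, r=1.0)
```

Hindsight shows up directly in the variances: the smoothed uncertainty is never larger than the filtered one.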
Week 6 · Missing Data? No Problem: The Kalman Filter
September 17, 2025
Demonstrates how the filter naturally handles gaps by propagating the model forward, with uncertainty widening until data resumes.
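The mechanism is simply "predict but don't update". A sketch where `NaN` marks a missing observation (gap location and parameters are my own choices for illustration):

```python
import numpy as np

def kalman_filter_nan(y, q, r, a0=0.0, p0=1e6):
    """Local level Kalman filter that treats NaN as a missing observation:
    the update step is skipped and uncertainty keeps growing."""
    a, p = a0, p0
    means, variances = [], []
    for yt in y:
        p = p + q                        # predict: variance always grows
        if not np.isnan(yt):             # update only when data is present
            f = p + r
            k = p / f
            a = a + k * (yt - a)
            p = (1 - k) * p
        means.append(a); variances.append(p)
    return np.array(means), np.array(variances)

rng = np.random.default_rng(3)
y = 3.0 + rng.normal(0, 0.5, 100)
y[40:60] = np.nan                        # a 20-step gap in the data
a, p = kalman_filter_nan(y, q=0.05, r=0.25)
```

During the gap the variance ratchets up by `q` per step; the first observation after the gap pulls it back down.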
Week 7 · Forecasting with State Space Models
September 24, 2025
Covers multi-step forecasting by treating future points as missing, inspecting component-wise forecasts and uncertainty.
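"Future points as missing" means the forecast is just the prediction step iterated with no updates. A sketch for the local level model (illustrative parameters, 95% intervals via the usual 1.96 rule):

```python
import numpy as np

def filter_local_level(y, q, r, a0=0.0, p0=1e6):
    """Run the filter and return the final state mean and variance."""
    a, p = a0, p0
    for yt in y:
        p = p + q
        f = p + r
        k = p / f
        a = a + k * (yt - a)
        p = (1 - k) * p
    return a, p

def forecast(a, p, q, r, steps):
    """Multi-step forecast = prediction steps with no updates,
    i.e. treating the future observations as missing."""
    means, half_widths = [], []
    for _ in range(steps):
        p = p + q                         # state uncertainty accumulates
        f = p + r                         # forecast variance adds obs noise
        means.append(a)                   # random walk: the mean stays flat
        half_widths.append(1.96 * np.sqrt(f))
    return np.array(means), np.array(half_widths)

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(0, 0.2, 150)) + rng.normal(0, 1.0, 150)
a, p = filter_local_level(y, q=0.04, r=1.0)
mean, half_width = forecast(a, p, q=0.04, r=1.0, steps=12)
```

The fan shape of forecast intervals falls straight out of the recursion: each extra step adds `q` to the state variance.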
Week 8 · From regression to state space: adding covariates the smart way
October 1, 2025
Explains how to include covariates as deterministic regressors (state variance set to zero), so the betas stay interpretable, come with standard errors, and coincide with recursive least squares estimates.
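With the coefficient as a constant state (zero state variance), the Kalman updates are exactly recursive least squares. A single-regressor sketch (data and prior values are illustrative):

```python
import numpy as np

def recursive_beta(y, x, r=1.0, b0=0.0, p0=1e8):
    """Kalman filter for y_t = x_t * beta + eps_t with beta constant
    (state variance zero): the updates are recursive least squares."""
    b, p = b0, p0
    path = []
    for yt, xt in zip(y, x):
        # No predict noise: beta is deterministic, so b and p carry over
        f = xt * p * xt + r               # prediction error variance
        k = p * xt / f                    # gain
        b = b + k * (yt - xt * b)
        p = (1 - k * xt) * p
        path.append(b)
    return np.array(path)

rng = np.random.default_rng(5)
x = rng.normal(0, 1, 300)
y = 2.5 * x + rng.normal(0, 1, 300)
betas = recursive_beta(y, x)               # beta estimate after each point
b_ols = float(np.dot(x, y) / np.dot(x, x)) # full-sample OLS for comparison
```

With a diffuse prior (`p0` large), the final recursive estimate matches full-sample OLS, and the path `betas` shows how the coefficient estimate evolved as data arrived.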
Week 9 · Predicted, Filtered, and Smoothed Errors
October 8, 2025
Defines the three residual types and explains why one-step prediction errors are the right diagnostic for model adequacy (flat ACF, zero mean, homoskedasticity).
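The diagnostic can be sketched by extracting standardized one-step prediction errors and checking their lag-1 autocorrelation (a numpy-only sketch; the local level setup and parameters are my own illustration):

```python
import numpy as np

def standardized_innovations(y, q, r, a0=0.0, p0=1e6):
    """One-step prediction errors v_t = y_t - E[y_t | past], scaled by
    their variance F_t; under a correct model they are white noise."""
    a, p = a0, p0
    v_std = []
    for yt in y:
        p = p + q
        f = p + r
        v_std.append((yt - a) / np.sqrt(f))   # standardized innovation
        k = p / f
        a = a + k * (yt - a)
        p = (1 - k) * p
    return np.array(v_std[1:])    # drop the diffuse first innovation

rng = np.random.default_rng(6)
level = np.cumsum(rng.normal(0, 0.3, 400))
y = level + rng.normal(0, 1.0, 400)
v = standardized_innovations(y, q=0.09, r=1.0)

# Lag-1 sample autocorrelation: should be near zero for an adequate model
acf1 = float(np.corrcoef(v[:-1], v[1:])[0, 1])
```

Filtered and smoothed residuals use the observation itself (partly or fully), so only the one-step errors are guaranteed to be serially uncorrelated under a correct model.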
Week 10 · From prediction errors to learning: how the Kalman filter uses likelihood
October 15, 2025
Connects prediction errors and their variance to the likelihood; shows how maximizing likelihood tunes parameters (e.g., signal-to-noise ratio) for the best fit.
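The prediction error decomposition makes this concrete: the Gaussian log-likelihood is built from the one-step errors and their variances, so profiling it over the signal-to-noise ratio tunes the model. A sketch for the local level model (the grid of candidate values is illustrative):

```python
import numpy as np

def loglik(y, q, r, a0=0.0, p0=1e6):
    """Prediction error decomposition: sum over t of the Gaussian
    log-density of the one-step error v_t with variance F_t."""
    a, p = a0, p0
    ll = 0.0
    for t, yt in enumerate(y):
        p = p + q
        f = p + r
        v = yt - a
        if t > 0:                         # skip the diffuse first term
            ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f
        a = a + k * v
        p = (1 - k) * p
    return ll

rng = np.random.default_rng(7)
level = np.cumsum(rng.normal(0, 0.3, 400))   # true q = 0.09
y = level + rng.normal(0, 1.0, 400)          # true r = 1.0

# Profile the likelihood over candidate signal-to-noise ratios (r fixed at 1)
grid = [0.001, 0.01, 0.09, 0.5, 2.0]
lls = [loglik(y, q, 1.0) for q in grid]
q_hat = grid[int(np.argmax(lls))]
```

A full estimator would hand `loglik` to a numerical optimizer instead of a grid, but the principle is the same: the parameters that make the one-step errors smallest relative to their predicted variance win.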
Missing a post? Let me know and I’ll add it.