Here is another, more indirect but, I believe, interesting one: the connection between different approaches to computing the partial autocorrelation coefficients of a stationary time series.
Definition 1
Consider the projection
$$Y_t-\mu=\alpha_1^{(m)}(Y_{t-1}-\mu)+\alpha_2^{(m)}(Y_{t-2}-\mu)+\ldots+\alpha_m^{(m)}(Y_{t-m}-\mu)$$
The $m$th partial autocorrelation equals $\alpha_m^{(m)}$.
It thus gives the influence of the $m$th lag on $Y_t$ \emph{after controlling for} $Y_{t-1},\ldots,Y_{t-m+1}$. Contrast this with $\rho_m$, which gives the `raw' correlation of $Y_t$ and $Y_{t-m}$.
How do we find the $\alpha_j^{(m)}$? Recall that a fundamental property of a regression of $Z_t$ on regressors $X_t$ is that the coefficients are chosen so that regressors and residuals are uncorrelated. For a population regression, this condition is stated in terms of population moments:
$$E[X_t(Z_t-X_t^\top\alpha^{(m)})]=0$$
Solving for $\alpha^{(m)}$, we find the linear projection coefficients
$$\alpha^{(m)}=[E(X_tX_t^\top)]^{-1}E[X_tZ_t]$$
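As a quick numerical sanity check of the orthogonality condition, here is a minimal sketch (the simulated design and coefficients are my own, purely illustrative): the sample analogue of $\alpha^{(m)}$ makes the regressors uncorrelated with the residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample analogue of the population projection of Z_t on regressors X_t.
# (Simulated data; the coefficients 0.5 and -0.3 are arbitrary.)
n = 10_000
X = rng.normal(size=(n, 2))
Z = 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(size=n)

# Sample version of alpha = [E(X X')]^{-1} E[X Z]
alpha = np.linalg.solve(X.T @ X / n, X.T @ Z / n)

# Defining property: regressors and residuals are uncorrelated
# (here: the sample cross-moments are numerically zero).
resid = Z - X @ alpha
print(X.T @ resid / n)  # ~ [0, 0] up to floating-point error
```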
Applying this formula to $Z_t=Y_t-\mu$ and
$$X_t=[(Y_{t-1}-\mu),\,(Y_{t-2}-\mu),\,\ldots,\,(Y_{t-m}-\mu)]^\top,$$
we have
$$E(X_tX_t^\top)=\begin{pmatrix}\gamma_0&\gamma_1&\cdots&\gamma_{m-1}\\\gamma_1&\gamma_0&\cdots&\gamma_{m-2}\\\vdots&\vdots&\ddots&\vdots\\\gamma_{m-1}&\gamma_{m-2}&\cdots&\gamma_0\end{pmatrix}$$
Also,
$$E(X_tZ_t)=\begin{pmatrix}\gamma_1\\\vdots\\\gamma_m\end{pmatrix}$$
Hence,
$$\alpha^{(m)}=\begin{pmatrix}\gamma_0&\gamma_1&\cdots&\gamma_{m-1}\\\gamma_1&\gamma_0&\cdots&\gamma_{m-2}\\\vdots&\vdots&\ddots&\vdots\\\gamma_{m-1}&\gamma_{m-2}&\cdots&\gamma_0\end{pmatrix}^{-1}\begin{pmatrix}\gamma_1\\\vdots\\\gamma_m\end{pmatrix}$$
The $m$th partial autocorrelation then is the last element of the vector $\alpha^{(m)}$.
So, we sort of run a multiple regression and find one coefficient of interest while controlling for the others.
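This translates directly into code: for each $m$, build the Toeplitz matrix of sample autocovariances, solve for $\alpha^{(m)}$, and read off the last element. A minimal numpy sketch (the function name and the simulated AR(1) series are my own, for illustration only):

```python
import numpy as np

def pacf_def1(y, max_lag):
    """PACF via Definition 1: for each m, solve the Toeplitz system
    E(X X') alpha = E(X Z) built from sample autocovariances and take
    the last element of alpha^(m)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    yc = y - y.mean()
    # Sample autocovariances gamma_0, ..., gamma_{max_lag}
    gamma = np.array([yc[: n - k] @ yc[k:] / n for k in range(max_lag + 1)])
    pacf = []
    for m in range(1, max_lag + 1):
        idx = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
        G = gamma[idx]                 # m x m Toeplitz matrix E(X X')
        g = gamma[1 : m + 1]           # E(X Z)
        alpha = np.linalg.solve(G, g)
        pacf.append(alpha[-1])         # last element = m-th PACF
    return np.array(pacf)

# Illustration on a simulated AR(1) with coefficient 0.6: the PACF
# should be close to 0.6 at lag 1 and close to zero beyond.
rng = np.random.default_rng(1)
e = rng.normal(size=5000)
y = np.zeros_like(e)
for t in range(1, len(e)):
    y[t] = 0.6 * y[t - 1] + e[t]
print(np.round(pacf_def1(y, 3), 2))
```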
Definition 2
The $m$th partial autocorrelation is the correlation between the prediction error of $Y_t$ predicted with $Y_{t-1},\ldots,Y_{t-m+1}$ and the prediction error of $Y_{t-m}$ predicted with $Y_{t-1},\ldots,Y_{t-m+1}$.
So, we sort of first control for the intermediate lags and then compute the correlation of the residuals.
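Definition 2 can be sketched the same way: regress both $Y_t$ and $Y_{t-m}$ on the intermediate lags and correlate the residuals. Again a minimal numpy sketch with my own (illustrative) function name; on a long simulated series it agrees with the Definition 1 computation up to sampling error.

```python
import numpy as np

def pacf_def2(y, m):
    """PACF at lag m via Definition 2: correlation of the residuals of
    Y_t and Y_{t-m}, each projected on Y_{t-1}, ..., Y_{t-m+1}."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    z_t = y[m:]          # Y_t      for t = m, ..., n-1
    z_tm = y[: n - m]    # Y_{t-m}
    if m == 1:
        # No intermediate lags to control for
        r_t, r_tm = z_t, z_tm
    else:
        # Columns: the intermediate lags Y_{t-1}, ..., Y_{t-m+1}
        X = np.column_stack([y[m - k : n - k] for k in range(1, m)])
        r_t = z_t - X @ np.linalg.lstsq(X, z_t, rcond=None)[0]
        r_tm = z_tm - X @ np.linalg.lstsq(X, z_tm, rcond=None)[0]
    return np.corrcoef(r_t, r_tm)[0, 1]

# Same kind of simulated AR(1) as above (coefficient 0.6, my own example):
# expect roughly 0.6 at lag 1 and roughly zero at lag 2.
rng = np.random.default_rng(1)
e = rng.normal(size=5000)
y = np.zeros_like(e)
for t in range(1, len(e)):
    y[t] = 0.6 * y[t - 1] + e[t]
print(round(pacf_def2(y, 1), 2), round(pacf_def2(y, 2), 2))
```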