Let $A$ be the $n \times n$ matrix with $ij$th entry $A_{ij} = \min\{i, j\}$. From a previous post, we know $A$ has a tridiagonal inverse $A^{-1}$ with $ij$th entry[^1]

$$[A^{-1}]_{ij} = \begin{cases} 2 & \text{if } i = j < n \\ 1 & \text{if } i = j = n \\ -1 & \text{if } \lvert i - j \rvert = 1 \\ 0 & \text{otherwise.} \end{cases}$$

For example, if $n = 4$ then

$$A = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 \\ 1 & 2 & 3 & 3 \\ 1 & 2 & 3 & 4 \end{bmatrix}$$

has inverse

$$A^{-1} = \begin{bmatrix} 2 & -1 & 0 & 0 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 1 \end{bmatrix}.$$
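Here's a quick NumPy sketch that checks this claim numerically, building $A$ from its definition and verifying footnote 1's criterion that $AA^{-1}$ equals the identity matrix:

```python
import numpy as np

n = 4
i = np.arange(1, n + 1).reshape(-1, 1)  # column of row indices
j = np.arange(1, n + 1).reshape(1, -1)  # row of column indices

A = np.minimum(i, j)  # broadcasting gives A_ij = min{i, j}

# Claimed inverse: 2 on the diagonal (1 in the last entry), -1 beside it
B = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B[-1, -1] = 1

assert np.allclose(A @ B, np.eye(n))  # A B = I, so B = A^{-1}
```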
We can use our knowledge of $A^{-1}$ to eigendecompose $A$. To see how, let $\{(\lambda_j, v_j)\}_{j=1}^n$ be the eigenpairs of $A^{-1}$. Yueh (2005, Theorem 1) shows that the eigenvector $v_j \in \mathbb{R}^n$ corresponding to the $j$th eigenvalue

$$\lambda_j = 2 \left( 1 + \cos \left( \frac{2j\pi}{2n+1} \right) \right)$$

has $i$th component

$$[v_j]_i = \alpha \, (-1)^{i+1} \sin \left( \frac{2ij\pi}{2n+1} \right),$$

where $\alpha \in \mathbb{R}$ is an arbitrary nonzero scalar. This vector has length

$$\lVert v_j \rVert \equiv \sqrt{\sum_{i=1}^n \left( [v_j]_i \right)^2} = \sqrt{\sum_{i=1}^n \alpha^2 \sin^2 \left( \frac{2ij\pi}{2n+1} \right)} = \lvert \alpha \rvert \sqrt{\frac{2n+1}{4}},$$

where the last equality can be verified using Wolfram Alpha and proved using complex analysis. So choosing $\alpha = 2/\sqrt{2n+1}$ ensures that the eigenvectors $v_1, v_2, \ldots, v_n$ of $A^{-1}$ have unit length. Since they correspond to distinct eigenvalues of a symmetric matrix, these vectors form an orthonormal basis for $\mathbb{R}^n$ by the spectral theorem. As a result, the $n \times n$ matrix $V = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$ with $ij$th entry $V_{ij} = [v_j]_i$ is orthogonal. Moreover, letting $\Lambda$ be the $n \times n$ diagonal matrix with $ii$th entry $\Lambda_{ii} = \lambda_i$ yields the eigendecomposition

$$A^{-1} = V \Lambda V^\top = \sum_{j=1}^n \lambda_j v_j v_j^\top$$

of $A^{-1}$. It follows from the orthogonality of $V$ that

$$A = \left( V \Lambda V^\top \right)^{-1} = V \Lambda^{-1} V^\top = \sum_{j=1}^n \frac{1}{\lambda_j} v_j v_j^\top$$

is the eigendecomposition of $A$. Thus $A$ and $A^{-1}$ have the same eigenvectors, but the eigenvalues of $A$ are the reciprocals of the eigenvalues of $A^{-1}$.
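We can check the whole decomposition numerically as well. The sketch below builds $V$ and $\Lambda$ from the formulas above, then confirms that $V$ is orthogonal and diagonalizes both $A^{-1}$ and $A$:

```python
import numpy as np

n = 6
i = np.arange(1, n + 1).reshape(-1, 1)  # row indices
j = np.arange(1, n + 1).reshape(1, -1)  # column indices

# Eigenvalues of A^{-1} and unit-length eigenvectors (columns of V)
lam = 2 * (1 + np.cos(2 * j.ravel() * np.pi / (2 * n + 1)))
alpha = 2 / np.sqrt(2 * n + 1)
V = alpha * (-1.0) ** (i + 1) * np.sin(2 * i * j * np.pi / (2 * n + 1))

A = np.minimum(i, j)  # A_ij = min{i, j}

assert np.allclose(V @ V.T, np.eye(n))                        # V is orthogonal
assert np.allclose(V @ np.diag(lam) @ V.T, np.linalg.inv(A))  # A^{-1} = V Λ V^T
assert np.allclose(V @ np.diag(1 / lam) @ V.T, A)             # A = V Λ^{-1} V^T
```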
Here's one scenario in which this decomposition is useful. Suppose I observe data $D = \{(x_i, y_i)\}_{i=1}^n$ generated by the process

$$y_i = f(x_i) + \varepsilon_i, \quad \varepsilon_i \overset{\text{iid}}{\sim} N(0, \sigma_\varepsilon^2),$$

where $\{f(x)\}_{x \ge 0}$ is a sample path of a standard Wiener process and where the errors $\varepsilon_i$ are iid normally distributed with variance $\sigma_\varepsilon^2$. I use these data to estimate $f(x)$ for some $x \ge 0$.[^2] My estimator $\hat f(x) \equiv \mathrm{E}[f(x) \mid D]$ has conditional variance

$$\mathrm{Var}(\hat f(x) \mid D) = \mathrm{Var}(f(x)) - w^\top \Sigma^{-1} w,$$

where $w \in \mathbb{R}^n$ is the vector with $i$th component $w_i = \mathrm{Cov}(y_i, f(x))$ and where $\Sigma \in \mathbb{R}^{n \times n}$ is the covariance matrix with $ij$th entry $\Sigma_{ij} = \mathrm{Cov}(y_i, y_j)$. If $x_i = i$ for each $i \in \{1, 2, \ldots, n\}$, then we can express this matrix as the sum

$$\Sigma = A + \sigma_\varepsilon^2 I,$$

where $A$ is the $n \times n$ matrix defined above and where $I$ is the $n \times n$ identity matrix. But we know $A = V \Lambda^{-1} V^\top$. We also know $I = V V^\top$, since $V$ is orthogonal. It follows that

$$\Sigma^{-1} = \left( V \Lambda^{-1} V^\top + \sigma_\varepsilon^2 V V^\top \right)^{-1} = V \left( \Lambda^{-1} + \sigma_\varepsilon^2 I \right)^{-1} V^\top,$$

where the middle factor is diagonal with $jj$th entry $(1/\lambda_j + \sigma_\varepsilon^2)^{-1} = \lambda_j / (1 + \sigma_\varepsilon^2 \lambda_j)$. From this we can derive a (relatively) closed-form expression for the conditional variance of $\hat f(x)$ given $D$.
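Here's a NumPy sketch of that computation (the values of $n$, $\sigma_\varepsilon^2$, and $x$ below are arbitrary examples, not from anything above). It inverts $\Sigma$ via the eigendecomposition and evaluates the conditional variance, using the standard Wiener-process facts that $\mathrm{Cov}(y_i, f(x)) = \min\{x_i, x\}$ and $\mathrm{Var}(f(x)) = x$:

```python
import numpy as np

n, sigma2_eps, x = 6, 0.5, 3.7  # example values chosen for illustration

i = np.arange(1, n + 1).reshape(-1, 1)
j = np.arange(1, n + 1).reshape(1, -1)

# Eigenpairs of A^{-1}, as above
lam = 2 * (1 + np.cos(2 * j.ravel() * np.pi / (2 * n + 1)))
alpha = 2 / np.sqrt(2 * n + 1)
V = alpha * (-1.0) ** (i + 1) * np.sin(2 * i * j * np.pi / (2 * n + 1))

# Sigma = A + sigma_eps^2 I, inverted via the eigendecomposition:
# only the diagonal middle factor needs inverting
Sigma = np.minimum(i, j) + sigma2_eps * np.eye(n)
Sigma_inv = V @ np.diag(1 / (1 / lam + sigma2_eps)) @ V.T
assert np.allclose(Sigma_inv, np.linalg.inv(Sigma))

# Conditional variance of the estimate of f(x), with x_i = i:
# w_i = Cov(y_i, f(x)) = min(x_i, x) and Var(f(x)) = x
w = np.minimum(i.ravel(), x)
print(x - w @ Sigma_inv @ w)
```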
[^1]: One can verify this claim by showing $A A^{-1}$ equals the identity matrix.

[^2]: I discuss this estimation problem in a recent paper.