Let $A$ be the $n \times n$ matrix with $ij$th entry $A_{ij} = \min\{i, j\}$. From a previous post, we know $A$ has a tridiagonal inverse $A^{-1}$ with $ij$th entry[^1]
$$[A^{-1}]_{ij} = \begin{cases}
2 & \text{if } i = j < n \\
1 & \text{if } i = j = n \\
-1 & \text{if } |i - j| = 1 \\
0 & \text{otherwise.}
\end{cases}$$
For example, if $n = 4$ then
$$A = \begin{bmatrix}
1 & 1 & 1 & 1 \\
1 & 2 & 2 & 2 \\
1 & 2 & 3 & 3 \\
1 & 2 & 3 & 4
\end{bmatrix}$$
has inverse
$$A^{-1} = \begin{bmatrix}
2 & -1 & 0 & 0 \\
-1 & 2 & -1 & 0 \\
0 & -1 & 2 & -1 \\
0 & 0 & -1 & 1
\end{bmatrix}.$$
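As a quick numerical sanity check, here's a minimal NumPy sketch (mine, not from the original post) that builds $A$ for $n = 4$ and confirms the claimed tridiagonal matrix is its inverse:

```python
import numpy as np

n = 4

# A has ij-th entry min{i, j}, using 1-based indices i and j.
i, j = np.indices((n, n)) + 1
A = np.minimum(i, j)

# Claimed inverse: 2 on the diagonal (except 1 in the last entry),
# -1 on the sub- and super-diagonals, and 0 elsewhere.
B = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B[-1, -1] = 1

# A @ B should be the identity matrix.
print(np.allclose(A @ B, np.eye(n)))  # True
```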

We can use our knowledge of $A^{-1}$ to eigendecompose $A$. To see how, let $\{(\lambda_j, v_j)\}_{j=1}^n$ be the eigenpairs of $A^{-1}$. Yueh (2005, Theorem 1) shows that the eigenvector $v_j \in \mathbb{R}^n$ corresponding to the $j$th eigenvalue
$$\lambda_j = 2 \left( 1 + \cos \left( \frac{2j\pi}{2n+1} \right) \right)$$
has $i$th component
$$[v_j]_i = \alpha (-1)^i \sin \left( \frac{2ij\pi}{2n+1} \right),$$
where $\alpha \in \mathbb{R}$ is an arbitrary scalar. This vector has length
$$\|v_j\| \equiv \sqrt{\sum_{i=1}^n \left( [v_j]_i \right)^2} = \sqrt{\sum_{i=1}^n \alpha^2 \sin^2 \left( \frac{2ij\pi}{2n+1} \right)} = |\alpha| \sqrt{\frac{2n+1}{4}},$$
where the last equality can be verified using Wolfram Alpha and proved using complex analysis. So choosing $\alpha = 2/\sqrt{2n+1}$ ensures that the eigenvectors $v_1, v_2, \ldots, v_n$ of $A^{-1}$ have unit length. Then, by the spectral theorem, these vectors form an orthonormal basis for $\mathbb{R}^n$. As a result, the $n \times n$ matrix
$$V = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$$
with $ij$th entry $V_{ij} = [v_j]_i$ is orthogonal. Moreover, letting $\Lambda$ be the $n \times n$ diagonal matrix with $ii$th entry $\Lambda_{ii} = \lambda_i$ yields the eigendecomposition
$$A^{-1} = V \Lambda V^T = \sum_{j=1}^n \lambda_j v_j v_j^T$$
of $A^{-1}$. It follows from the orthogonality of $V$ that
$$A = (V \Lambda V^T)^{-1} = V \Lambda^{-1} V^T = \sum_{j=1}^n \frac{1}{\lambda_j} v_j v_j^T$$
is the eigendecomposition of $A$. Thus $A$ and $A^{-1}$ have the same eigenvectors, but the eigenvalues of $A$ are the reciprocals of the eigenvalues of $A^{-1}$.
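To make this concrete, here's a short NumPy check (again my own sketch, not from the original post) that the formulas above reproduce $A$: it builds $V$ and the eigenvalues for $n = 6$, then verifies that $V$ is orthogonal and that $V \Lambda^{-1} V^T$ equals the min matrix.

```python
import numpy as np

n = 6

# Eigenvalues lam_j = 2(1 + cos(2j*pi/(2n+1))) and unit eigenvectors
# [v_j]_i = (-1)^i * (2/sqrt(2n+1)) * sin(2ij*pi/(2n+1)) of A^{-1}.
j = np.arange(1, n + 1)
i = j[:, None]
lam = 2 * (1 + np.cos(2 * j * np.pi / (2 * n + 1)))
V = (-1.0) ** i * 2 / np.sqrt(2 * n + 1) * np.sin(2 * i * j * np.pi / (2 * n + 1))

# V should be orthogonal, and V diag(1/lam) V^T should recover A.
A = np.minimum(*(np.indices((n, n)) + 1))
print(np.allclose(V.T @ V, np.eye(n)))             # True
print(np.allclose(V @ np.diag(1 / lam) @ V.T, A))  # True
```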

Here’s one scenario in which this decomposition is useful: Suppose I observe data $D = \{(x_i, y_i)\}_{i=1}^n$ generated by the process
$$y_i = f(x_i) + \varepsilon_i, \quad \varepsilon_i \overset{\text{iid}}{\sim} \mathrm{N}(0, \sigma_\varepsilon^2),$$
where $\{f(x)\}_{x \ge 0}$ is a sample path of a standard Wiener process and where the errors $\varepsilon_i$ are iid normally distributed with variance $\sigma_\varepsilon^2$. I use these data to estimate $f(x)$ for some $x \ge 0$.[^2] My estimator $\hat{f}(x) \equiv \mathrm{E}[f(x) \mid D]$ has conditional variance
$$\mathrm{Var}(\hat{f}(x) \mid D) = \mathrm{Var}(f(x)) - w^T \Sigma^{-1} w,$$
where $w \in \mathbb{R}^n$ is the vector with $i$th component $w_i = \mathrm{Cov}(y_i, f(x))$ and where $\Sigma \in \mathbb{R}^{n \times n}$ is the covariance matrix with $ij$th entry $\Sigma_{ij} = \mathrm{Cov}(y_i, y_j)$. If $x_i = i$ for each $i \in \{1, 2, \ldots, n\}$, then we can express this matrix as the sum
$$\Sigma = A + \sigma_\varepsilon^2 I,$$
where $A$ is the $n \times n$ matrix defined above and where $I$ is the $n \times n$ identity matrix. But we know $A = V \Lambda^{-1} V^T$. We also know $I = V V^T$, since $V$ is orthogonal. It follows that
$$\Sigma^{-1} = (V \Lambda^{-1} V^T + \sigma_\varepsilon^2 V V^T)^{-1} = V (\Lambda^{-1} + \sigma_\varepsilon^2 I)^{-1} V^T,$$
from which we can derive a (relatively) closed-form expression for the conditional variance of $\hat{f}(x)$ given $D$.
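As a final check, here's a small NumPy sketch (mine, with an arbitrary noise variance; the variable names are illustrative) confirming that the eigendecomposition-based formula for $\Sigma^{-1}$ matches a direct matrix inversion:

```python
import numpy as np

n, s2 = 6, 0.5  # s2 stands in for the noise variance sigma_eps^2

# Eigenvalues and orthonormal eigenvectors of A^{-1}, as above.
j = np.arange(1, n + 1)
i = j[:, None]
lam = 2 * (1 + np.cos(2 * j * np.pi / (2 * n + 1)))
V = (-1.0) ** i * 2 / np.sqrt(2 * n + 1) * np.sin(2 * i * j * np.pi / (2 * n + 1))

# Sigma = A + s2 * I, inverted two ways.
A = np.minimum(*(np.indices((n, n)) + 1))
Sigma = A + s2 * np.eye(n)
Sigma_inv = V @ np.diag(1 / (1 / lam + s2)) @ V.T  # V (Lam^{-1} + s2 I)^{-1} V^T
print(np.allclose(Sigma_inv, np.linalg.inv(Sigma)))  # True
```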


[^1]: One can verify this claim by showing $A A^{-1}$ equals the identity matrix.

[^2]: I discuss this estimation problem in a recent paper.