Let $0 < x_1 < x_2 < \cdots < x_n$ and let $A$ be the symmetric $n \times n$ matrix with $ij$th entry $A_{ij} = \min\{x_i, x_j\}$.
This matrix has linearly independent columns and so is invertible.
Its inverse $A^{-1}$ is symmetric, tridiagonal, and has $ij$th entry
$$[A^{-1}]_{ij} = \begin{cases}
\dfrac{1}{x_1} + \dfrac{1}{x_2 - x_1} & \text{if } i = j = 1 \\
\dfrac{1}{x_i - x_{i-1}} + \dfrac{1}{x_{i+1} - x_i} & \text{if } 1 < i = j < n \\
\dfrac{1}{x_n - x_{n-1}} & \text{if } i = j = n \\
-\dfrac{1}{x_j - x_i} & \text{if } i = j - 1 \\
-\dfrac{1}{x_i - x_j} & \text{if } i = j + 1 \\
0 & \text{otherwise.}
\end{cases}$$
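The closed form is easy to code up directly. Here is a minimal Python sketch (the helper name `min_matrix_inverse` is mine) that builds the tridiagonal inverse entry by entry and checks it against a dense numerical inverse:

```python
import numpy as np

def min_matrix_inverse(x):
    """Tridiagonal inverse of A_ij = min(x_i, x_j), assuming 0 < x_1 < ... < x_n."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = np.diff(x)                      # interior gaps x_{i+1} - x_i
    gaps = np.concatenate(([x[0]], d))  # x_1 - 0 followed by the interior gaps
    inv = np.zeros((n, n))
    for i in range(n):
        # Diagonal: 1/(x_i - x_{i-1}) + 1/(x_{i+1} - x_i), with x_0 = 0
        # and the second term absent when i = n.
        inv[i, i] = 1.0 / gaps[i]
        if i < n - 1:
            inv[i, i] += 1.0 / d[i]
    for i in range(n - 1):
        # Off-diagonals: -1/(x_{i+1} - x_i)
        inv[i, i + 1] = inv[i + 1, i] = -1.0 / d[i]
    return inv

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
A = np.minimum.outer(x, x)              # A_ij = min(x_i, x_j)
assert np.allclose(min_matrix_inverse(x), np.linalg.inv(A))
```

Building the inverse this way costs $O(n)$ arithmetic for the nonzero entries, versus the $O(n^3)$ of a generic dense inversion.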
For example, if $x_i = 2^{i-1}$ for each $i \le n = 5$ then
$$A = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 \\
1 & 2 & 2 & 2 & 2 \\
1 & 2 & 4 & 4 & 4 \\
1 & 2 & 4 & 8 & 8 \\
1 & 2 & 4 & 8 & 16
\end{bmatrix}$$
and
$$A^{-1} = \begin{bmatrix}
2 & -1 & 0 & 0 & 0 \\
-1 & 1.5 & -0.5 & 0 & 0 \\
0 & -0.5 & 0.75 & -0.25 & 0 \\
0 & 0 & -0.25 & 0.375 & -0.125 \\
0 & 0 & 0 & -0.125 & 0.125
\end{bmatrix}.$$
You may wonder: why is this useful?
Suppose I observe data $\{(x_i, y_i)\}_{i=1}^n$, where the function $f : [0, \infty) \to \mathbb{R}$ mapping regressors $x_i \ge 0$ to outcomes $y_i = f(x_i)$ is the realization of a Wiener process.
I use these data to estimate some value $f(x)$ via Bayesian regression.
My estimate depends on the inverse of the covariance matrix of the outcome vector $y = (y_1, y_2, \ldots, y_n)$.
This matrix has $ij$th entry $\min\{x_i, x_j\}$, so I can compute its inverse using the expression above.
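To make that concrete, here is a minimal sketch of the regression step, under the assumptions that the Wiener process has zero mean and the observations are noiseless (the function name `posterior_mean` is mine). The posterior mean of $f(x^*)$ is $k^\top A^{-1} y$, where $k_i = \min\{x^*, x_i\}$ collects the prior covariances between $f(x^*)$ and the observed outcomes:

```python
import numpy as np

def posterior_mean(x_star, x, y):
    """Posterior mean of f(x_star) given noiseless observations y_i = f(x_i)
    of a zero-mean Wiener process."""
    A = np.minimum.outer(x, x)        # covariance matrix of the outcome vector
    k = np.minimum(x_star, x)         # cov(f(x_star), f(x_i)) = min(x_star, x_i)
    return k @ np.linalg.solve(A, y)  # k' A^{-1} y

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([0.3, -0.1, 0.4, 0.2, 0.5])
print(posterior_mean(3.0, x, y))
```

Because $A^{-1}$ is tridiagonal, the solve can in principle be done in $O(n)$ time with a banded solver rather than the dense `np.linalg.solve` used here. The Markov property of the Wiener process shows up directly: for $x^*$ between two observed regressors, the posterior mean is just the linear interpolation of the neighboring outcomes.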