Suppose I want to learn the value of a parameter $\theta\in\mathbb{R}$. My prior is that $\theta$ is normally distributed with variance $\sigma_0^2$. I observe $n\ge1$ signals $s_i=\theta+\varepsilon_i$ of $\theta$. The errors $\varepsilon_i$ in these signals are independent of $\theta$. They are jointly normally distributed with equal variances $\mathrm{Var}(\varepsilon_i)=\sigma^2$ and pairwise correlations
$$\mathrm{Cor}(\varepsilon_i,\varepsilon_j)=\begin{cases}1&\text{if }i=j\\\rho&\text{otherwise.}\end{cases}$$
I assume $-1/(n-1)\le\rho\le1$ so that this distribution is feasible.¹
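To make the feasibility condition concrete, here's a minimal Python sketch (assuming NumPy; the names `error_covariance` and `is_feasible` are mine, purely for illustration) that builds the errors' equicorrelated covariance matrix and checks whether it is positive semi-definite:

```python
import numpy as np

def error_covariance(n: int, rho: float, sigma: float = 1.0) -> np.ndarray:
    """Covariance matrix of n errors with equal variances sigma^2
    and pairwise correlations rho."""
    return sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

def is_feasible(n: int, rho: float) -> bool:
    """A valid covariance matrix must be positive semi-definite.
    This matrix has eigenvalues sigma^2 (1 - rho), with multiplicity
    n - 1, and sigma^2 (1 + (n - 1) rho), so feasibility requires
    -1/(n - 1) <= rho <= 1."""
    eigenvalues = np.linalg.eigvalsh(error_covariance(n, rho))
    return bool(np.all(eigenvalues >= -1e-12))

print(is_feasible(3, -1.0))   # False: rho = -1 is infeasible when n = 3
print(is_feasible(3, -0.5))   # True: rho = -1/(n-1) is the boundary case
```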

Observing $s_1,s_2,\ldots,s_n$ is the same as observing the sample mean $\bar{s}_n\equiv\frac{1}{n}\sum_{i=1}^n s_i$, which is normally distributed and has conditional variance
$$\mathrm{Var}(\bar{s}_n\mid\theta)=\frac{(1+(n-1)\rho)\sigma^2}{n}$$
under my prior. The posterior distribution of $\theta$ given $\bar{s}_n$ is also normal and has variance
$$\mathrm{Var}(\theta\mid\bar{s}_n)=\left(\frac{1}{\sigma_0^2}+\frac{n}{(1+(n-1)\rho)\sigma^2}\right)^{-1}.$$
Both variances are (i) decreasing in $n$ when $\rho<1$ and (ii) increasing in $\rho$ when $n>1$. Intuitively, if the signals are not perfectly correlated then observing more of them gives me more information about $\theta$. If they are negatively correlated then their errors “cancel out” and the sample mean $\bar{s}_n$ gives me a precise estimate of $\theta$.
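As a quick check on these comparative statics, here's a sketch of the two formulas in plain Python (the function names `sampling_variance` and `posterior_variance` are again mine, not from any library):

```python
def sampling_variance(n, rho, sigma=1.0):
    """Var(s_bar_n | theta) = (1 + (n - 1) rho) sigma^2 / n."""
    return (1 + (n - 1) * rho) * sigma**2 / n

def posterior_variance(n, rho, sigma0=1.0, sigma=1.0):
    """Var(theta | s_bar_n)
    = (1/sigma0^2 + n / ((1 + (n - 1) rho) sigma^2))^(-1)."""
    return 1 / (1 / sigma0**2 + n / ((1 + (n - 1) * rho) * sigma**2))

# (i) Both variances fall as n grows, provided rho < 1.
for n in (1, 2, 5, 10):
    print(n, sampling_variance(n, rho=0.5), posterior_variance(n, rho=0.5))

# (ii) Both variances rise with rho, provided n > 1.
for rho in (-0.2, 0.0, 0.5, 1.0):
    print(rho, sampling_variance(5, rho), posterior_variance(5, rho))
```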

The chart below shows how $\mathrm{Var}(\bar{s}_n\mid\theta)$ and $\mathrm{Var}(\theta\mid\bar{s}_n)$ vary with $\rho$ and $n$ when $\sigma_0=\sigma=1$. If $\rho=-1/(n-1)$ then $\varepsilon_1+\varepsilon_2+\cdots+\varepsilon_n=0$, and so $\mathrm{Var}(\bar{s}_n\mid\theta)=0$ and $\mathrm{Var}(\theta\mid\bar{s}_n)=0$ because $\bar{s}_n=\theta$. Whereas if $\rho=1$ then signals $s_2$ through $s_n$ provide the same information as $s_1$, and so $\mathrm{Var}(\bar{s}_n\mid\theta)=\mathrm{Var}(s_1\mid\theta)$ and $\mathrm{Var}(\theta\mid\bar{s}_n)=\mathrm{Var}(\theta\mid s_1)$.
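Reusing `sampling_variance` and `posterior_variance` from the sketch above, both limiting cases can be verified numerically (evaluating just above the lower boundary, since the posterior formula divides by zero exactly at $\rho=-1/(n-1)$):

```python
n = 4

# rho -> -1/(n-1): the errors sum to zero, so s_bar_n = theta
# and both variances vanish.
rho = -1 / (n - 1) + 1e-9
print(sampling_variance(n, rho))   # ~0
print(posterior_variance(n, rho))  # ~0

# rho = 1: signals beyond the first add nothing, so both variances
# match their single-signal (n = 1) counterparts.
print(sampling_variance(n, 1.0), sampling_variance(1, 0.0))    # 1.0, 1.0
print(posterior_variance(n, 1.0), posterior_variance(1, 0.0))  # 0.5, 0.5
```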


  1. For example, it is impossible for three normal variables to have equal variances and pairwise correlations of $-1$. See here for an explanation.