Suppose I want to learn the value of $\omega\in\{0,1\}$. I observe a sequence of iid signals $(s_n)_{n\ge1}$ with $\Pr(s_n=0\mid\omega=0)=1-\alpha$ and $\Pr(s_n=1\mid\omega=1)=1-\beta$, where $\alpha$ and $\beta$ are the false positive and false negative rates. I let $\pi_n$ denote my belief that $\omega=1$ after observing $n$ signals, and update this belief sequentially via Bayes' formula: $$\pi_n(s)=\frac{\Pr(s_n=s\mid\omega=1)\,\pi_{n-1}}{\Pr(s_n=s)}.$$ In particular, if I observe $s_n=0$ then I update my belief to $$\pi_n(0)=\frac{\beta\pi_{n-1}}{\beta\pi_{n-1}+(1-\alpha)(1-\pi_{n-1})},$$ whereas if I observe $s_n=1$ then I update my belief to $$\pi_n(1)=\frac{(1-\beta)\pi_{n-1}}{(1-\beta)\pi_{n-1}+\alpha(1-\pi_{n-1})}.$$
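This update rule can be sketched in a few lines of Python (a minimal illustration; the function name `update` is mine, not from the text):

```python
def update(prior, signal, alpha, beta):
    """One Bayesian update of the belief that omega = 1.

    alpha is the false positive rate Pr(s = 1 | omega = 0);
    beta is the false negative rate Pr(s = 0 | omega = 1).
    """
    if signal == 1:
        like_1, like_0 = 1 - beta, alpha      # Pr(s = 1 | omega)
    else:
        like_1, like_0 = beta, 1 - alpha      # Pr(s = 0 | omega)
    numerator = like_1 * prior
    return numerator / (numerator + like_0 * (1 - prior))

# From pi_0 = 0.5, a signal s = 1 with alpha = 0.4 and beta = 0.2 gives
# pi_1 = 0.8 * 0.5 / (0.8 * 0.5 + 0.4 * 0.5) = 2/3.
print(update(0.5, 1, alpha=0.4, beta=0.2))
```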

The chart below shows how my belief $\pi_n$ changes with $n$. Each path in the chart corresponds to the sequence of beliefs $(\pi_0,\pi_1,\ldots,\pi_{100})$ obtained by updating my initial belief $\pi_0=0.5$ in response to a signal sequence $(s_1,s_2,\ldots,s_{100})$. I simulate 10 such sequences, fixing $\omega=1$ and $\alpha=0.4$ but varying $\beta\in\{0.2,0.4,0.6,0.8\}$.
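A sketch of the simulation behind such a chart (it generates the belief paths but omits the plotting; I assume signals are drawn with $\Pr(s_n=1\mid\omega=1)=1-\beta$ as defined above, and the function name and seeds are mine):

```python
import random

def simulate_path(alpha, beta, n_signals=100, prior=0.5, seed=None):
    """Simulate one belief path (pi_0, ..., pi_n) with omega = 1 fixed."""
    rng = random.Random(seed)
    path = [prior]
    for _ in range(n_signals):
        s = 1 if rng.random() < 1 - beta else 0  # Pr(s = 1 | omega = 1) = 1 - beta
        pi = path[-1]
        if s == 1:
            pi = (1 - beta) * pi / ((1 - beta) * pi + alpha * (1 - pi))
        else:
            pi = beta * pi / (beta * pi + (1 - alpha) * (1 - pi))
        path.append(pi)
    return path

# Ten paths for each false negative rate, as in the chart.
paths = {b: [simulate_path(0.4, b, seed=i) for i in range(10)]
         for b in (0.2, 0.4, 0.6, 0.8)}
```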

If $\beta\ne0.6$ then my belief $\pi_n$ converges to 1 as $n$ grows. However, if $\beta=0.6$ then $\pi_n=\pi_0$ for each $n$; that is, I never update my beliefs regardless of the signals I observe. This is because if $\alpha+\beta=1$ then $\Pr(s_n=1\mid\omega=1)=1-\beta=\alpha=\Pr(s_n=1\mid\omega=0)$, so $\Pr(s_n=s\mid\omega=1)=\Pr(s_n=s)$ for each $s\in\{0,1\}$: signals are uninformative because they are independent of $\omega$.
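This can be checked numerically: when $\alpha+\beta=1$, plugging either signal into the update formulas returns the prior unchanged (a small sketch under the parameter values from the text):

```python
alpha, beta = 0.4, 0.6  # alpha + beta = 1, so both likelihoods coincide
for prior in (0.2, 0.5, 0.9):
    # posterior after observing s = 0 and after observing s = 1
    post0 = beta * prior / (beta * prior + (1 - alpha) * (1 - prior))
    post1 = (1 - beta) * prior / ((1 - beta) * prior + alpha * (1 - prior))
    assert abs(post0 - prior) < 1e-12
    assert abs(post1 - prior) < 1e-12
```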

The chart below plots the mean of my beliefs $\pi_n$ across 1,000 realizations of the signals simulated above. Again, I fix $\omega=1$ and the false positive rate $\alpha=0.4$ but vary the false negative rate $\beta\in\{0.2,0.4,0.6,0.8\}$. Higher values of $\beta$ are not always worse: my belief converges to the truth faster when $\beta=0.8$ than when $\beta=0.4$. Intuitively, if I know the false negative rate is close to 100% then observing a signal $s_n=0$ is itself evidence that $\omega=1$, because such signals are more likely when $\omega=1$ than when $\omega=0$.
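One standard way to quantify this speed of learning (my addition, not from the text above) is the Kullback-Leibler divergence of the signal distribution under $\omega=0$ from that under $\omega=1$: when $\omega=1$, the log odds of my belief grow at this rate on average, so a larger divergence means faster convergence.

```python
from math import log

def kl_rate(alpha, beta):
    """KL divergence of Pr(s | omega = 0) from Pr(s | omega = 1):
    the average per-signal growth of the log odds when omega = 1."""
    # Pr(s = 1 | omega = 1) = 1 - beta, Pr(s = 1 | omega = 0) = alpha
    return (1 - beta) * log((1 - beta) / alpha) + beta * log(beta / (1 - alpha))

# With alpha = 0.4, learning is faster at beta = 0.8 than at beta = 0.4,
# and stops entirely at beta = 0.6 (where alpha + beta = 1).
print(kl_rate(0.4, 0.8), kl_rate(0.4, 0.4), kl_rate(0.4, 0.6))
```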