Suppose my payoff $u(a, \theta) = -(a - \theta)^2$ from taking an action $a \in \mathbb{R}$ depends on an unknown state $\theta \in \mathbb{R}$.[^1] I can learn about $\theta$ by collecting data $x = (x_1, x_2, \ldots, x_n)$, where the observations $x_i$ are iid normally distributed with mean $\theta$ and variance $\sigma^2$:[^2]
$$x_i \sim N(\theta, \sigma^2).$$
I use these data, my prior belief $\theta \sim N(\mu, 1/\alpha)$, and Bayes' rule to form a posterior belief
$$\theta \mid x \sim N\left(\frac{\alpha \mu + n \beta \bar{x}}{\alpha + n \beta},\ \frac{1}{\alpha + n \beta}\right),$$
where $\alpha$ is the precision of my prior, $\beta = 1/\sigma^2$ is the precision of the $x_i$, and $\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i$ is their arithmetic mean. Then my expected payoff from taking action $a$ equals
$$\mathrm{E}[u(a, \theta) \mid x] = -\left(a - \frac{\alpha \mu + n \beta \bar{x}}{\alpha + n \beta}\right)^2 - \frac{1}{\alpha + n \beta}.$$
I maximize this payoff by choosing $a = \mathrm{E}[\theta \mid x]$, the posterior mean. This yields expected payoff
$$-\frac{1}{\alpha + n \beta},$$
which is increasing in $n$. Intuitively, collecting more data makes me more informed and makes my optimal action more likely to be "correct." But data are costly: I have to pay $c \beta n$ to collect $n$ observations, where $c > 0$ captures the marginal cost of information.[^3] I choose $n$ to maximize my total payoff
$$V(n) = -\frac{1}{\alpha + n \beta} - c \beta n,$$
which has maximizer
$$n^* = \max\left\{0,\ \frac{1/\sqrt{c} - \alpha}{\beta}\right\}.$$
If $c \ge 1/\alpha^2$ then $n^* = 0$ because the cost of collecting any data isn't worth the variance reduction they deliver. Whereas if $c < 1/\alpha^2$ then $n^*$ is strictly positive and gives me total payoff
$$V(n^*) = \alpha c - 2\sqrt{c}.$$
Both $n^*$ and $V(n^*)$ are decreasing in $c$. Intuitively, making the data more expensive makes me want to collect less, leaving me less informed and worse off. In contrast, making my prior more precise (i.e., increasing $\alpha$) makes me want to collect less data but leaves me better off. This is because starting out well-informed means I can pay for less data and still end up well-informed.
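As a sanity check on the algebra, here's a small numerical sketch. The parameter values ($\alpha = 1$, $\beta = 2$, $c = 0.04$) are hypothetical, chosen so that $c < 1/\alpha^2$ and the optimum is interior; it verifies that the closed-form $n^*$ and $V(n^*)$ match a grid search over $V(n)$:

```python
import numpy as np

# Hypothetical parameter values chosen so that n* > 0 (i.e., c < 1/alpha^2)
alpha = 1.0   # prior precision
beta = 2.0    # precision of each observation
c = 0.04      # marginal cost of information

def total_payoff(n):
    """Expected payoff -1/(alpha + n*beta) net of the data cost c*beta*n."""
    return -1.0 / (alpha + n * beta) - c * beta * n

# Closed-form maximizer and maximized payoff derived above
n_star = max(0.0, (1 / np.sqrt(c) - alpha) / beta)  # = (5 - 1)/2 = 2
v_star = alpha * c - 2 * np.sqrt(c)                 # = 0.04 - 0.4 = -0.36

# Check: the closed forms agree, and no n on a fine grid does better
grid = np.linspace(0, 10, 100001)
assert abs(total_payoff(n_star) - v_star) < 1e-9
assert total_payoff(grid).max() <= v_star + 1e-9
print(round(n_star, 6), round(v_star, 6))
```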
Curiously, making the $x_i$ more precise (i.e., increasing $\beta$) makes me want to collect fewer observations but does not change my welfare. This is because the cost of each observation scales with its precision: I effectively pay for the total precision $n^* \beta = 1/\sqrt{c} - \alpha$ I buy, which does not depend on $\beta$. The higher cost of more precise observations exactly offsets the value of the information they deliver, leaving my total payoff unchanged.
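This invariance is easy to confirm numerically. Using the same (hypothetical) values $\alpha = 1$ and $c = 0.04$ as above, the sketch below evaluates the total payoff at the optimum for several values of $\beta$:

```python
import numpy as np

# Hypothetical parameters; any c < 1/alpha^2 gives an interior optimum
alpha, c = 1.0, 0.04

def n_star(beta):
    """Optimal number of observations when each has precision beta."""
    return max(0.0, (1 / np.sqrt(c) - alpha) / beta)

def max_payoff(beta):
    """Total payoff -1/(alpha + n*beta) - c*beta*n evaluated at n = n_star."""
    n = n_star(beta)
    return -1.0 / (alpha + n * beta) - c * beta * n

betas = [0.5, 1.0, 2.0, 4.0]
print([round(n_star(b), 2) for b in betas])      # optimal n changes with beta
print([round(max_payoff(b), 6) for b in betas])  # payoff is the same throughout
```

The first list varies with $\beta$ while the second is constant at $\alpha c - 2\sqrt{c}$, exactly as the comparative statics predict.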
[^1]: See here for my discussion of the case when the state and data are binary.

[^2]: This is the same as letting $x_i = \theta + \varepsilon_i$ with iid errors $\varepsilon_i \sim N(0, \sigma^2)$.

[^3]: Pomatto et al. (2023) show that this cost function (uniquely) satisfies some attractive properties. Linear cost functions also appear in many sequential sampling problems (see, e.g., Wald's (1945) classic model or Morris and Strack's (2019) discussion of it) and their continuous-time analogues (see, e.g., Fudenberg et al. (2018) or Liang et al. (2022)).