Ben Davies
https://bldavies.com/
Recent content on Ben Davies
Learning about a changing state
https://bldavies.com/blog/learning-changing-state/
Mon, 08 Jan 2024 00:00:00 +0000
<p>I have a <a href="https://arxiv.org/abs/2401.03607">new paper</a> on Bayesian learning.
It extends my model of <a href="https://bldavies.com/blog/paying-precision/">paying for precision</a> to a setting where the unknown state changes over time.
This makes the agent keep buying new information as his existing information becomes out of date.
I show how his demand for information depends on whether he is myopic or forward-looking, and on the <a href="https://en.wikipedia.org/wiki/Gaussian_process">Gaussian process</a> defining how the state evolves.</p>
<p>The paper stems from my research with <a href="https://kingcenter.stanford.edu/anirudh-sankar">Anirudh Sankar</a> on how people learn across contexts.
Suppose I ask you for advice, and you say “X worked for me.”
But will X work for me?
We’re different people with different contexts (e.g., physical and social positions).
Our outcomes might be different.</p>
<p>Imagine there’s a function mapping contexts to outcomes.
If I know this function then I can invert it, taking information generated in your context and porting it into mine.
But if I don’t know the function then I can’t invert it, which makes learning from you hard.
My research with Anirudh formalizes this idea: the more I know about the function mapping contexts to outcomes, the easier it is to learn across contexts.</p>
<p>Mathematically, learning across contexts is like learning across time: the function mapping contexts to outcomes is like a stochastic process mapping times to states.
But contexts, unlike time, can have many dimensions and may not be <a href="https://en.wikipedia.org/wiki/Total_order">totally orderable</a>.
Contexts are more general, and so models of learning across them can lead to more general insights.
I hope to share some of those insights in the future.</p>
Learning from correlated signals
https://bldavies.com/blog/learning-correlated-signals/
Fri, 24 Nov 2023 00:00:00 +0000
<p>Suppose I want to learn the value of a parameter <code>\(\theta\in\mathbb{R}\)</code>.
My prior is that <code>\(\theta\)</code> is normally distributed with variance <code>\(\sigma_0^2\)</code>.
I observe <code>\(n\ge1\)</code> signals
<code>$$\DeclareMathOperator{\Cor}{Cor} \DeclareMathOperator{\E}{E} \DeclareMathOperator{\Var}{Var} \newcommand{\R}{\mathbb{R}} \renewcommand{\epsilon}{\varepsilon} s_i=\theta+\epsilon_i$$</code>
of <code>\(\theta\)</code>.
The errors <code>\(\epsilon_i\)</code> in these signals are independent of <code>\(\theta\)</code>.
They are jointly normally distributed with equal variances <code>\(\Var(\epsilon_i)=\sigma^2\)</code> and pairwise correlations
<code>$$\Cor(\epsilon_i,\epsilon_j)=\begin{cases} 1 & \text{if}\ i=j \\ \rho & \text{otherwise}. \end{cases}$$</code>
I assume <code>\(-1/(n-1)\le\rho\le1\)</code> so that this distribution is feasible.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p>Observing <code>\(s_1,s_2,\ldots,s_n\)</code> is equivalent to observing the sample mean
<code>$$\bar{s}_n\equiv\frac{1}{n}\sum_{i=1}^ns_i,$$</code>
which is normally distributed and has conditional variance
<code>$$\Var(\bar{s}_n\mid\theta)=\frac{(1+(n-1)\rho)\sigma^2}{n}$$</code>
under my prior.
The posterior distribution of <code>\(\theta\)</code> given <code>\(\bar{s}_n\)</code> is also normal and has variance
<code>$$\Var(\theta\mid\bar{s}_n)=\left(\frac{1}{\sigma_0^2}+\frac{n}{(1+(n-1)\rho)\sigma^2}\right)^{-1}.$$</code>
Both variances are
(i) decreasing in <code>\(n\)</code> when <code>\(\rho<1\)</code> and
(ii) increasing in <code>\(\rho\)</code> when <code>\(n>1\)</code>.
Intuitively, if the signals are not perfectly correlated then observing more gives me more information about <code>\(\theta\)</code>.
If they are negatively correlated then their errors “cancel out” and the sample mean <code>\(\bar{s}_n\)</code> gives me a precise estimate of <code>\(\theta\)</code>.</p>
<p>The chart below shows how <code>\(\Var(\bar{s}_n\mid\theta)\)</code> and <code>\(\Var(\theta\mid\bar{s}_n)\)</code> vary with <code>\(\rho\)</code> and <code>\(n\)</code> when <code>\(\sigma_0=\sigma=1\)</code>.
If <code>\(\rho=-1/(n-1)\)</code> then <code>\(\epsilon_1+\epsilon_2+\cdots+\epsilon_n=0\)</code>, and so <code>\(\Var(\bar{s}_n\mid\theta)=0\)</code> and <code>\(\Var(\theta\mid\bar{s}_n)=0\)</code> because <code>\(\bar{s}_n=\theta\)</code>.
Whereas if <code>\(\rho=1\)</code> then signals <code>\(s_2\)</code> through <code>\(s_n\)</code> provide the same information as <code>\(s_1\)</code>, and so <code>\(\Var(\bar{s}_n\mid\theta)=\Var(s_1\mid\theta)\)</code> and <code>\(\Var(\theta\mid\bar{s}_n)=\Var(\theta\mid s_1)\)</code>.</p>
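As a numerical sanity check (a Python sketch with arbitrary parameter values, assuming numpy; not part of the original post), one can compute the variance of the sample mean directly from the equicorrelated error covariance matrix and compare it with the closed-form expressions given above:

```python
import numpy as np

# Arbitrary illustrative parameters (any n >= 1, -1/(n-1) <= rho <= 1 works)
n, rho, sigma, sigma0 = 5, 0.3, 1.5, 2.0

# Equicorrelated error covariance: sigma^2 on the diagonal, rho * sigma^2 off it
Sigma = sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

# Var(s_bar | theta) computed directly from the covariance of the sample mean...
ones = np.ones(n)
var_sbar = ones @ Sigma @ ones / n**2

# ...and from the closed form (1 + (n - 1) * rho) * sigma^2 / n
var_sbar_formula = (1 + (n - 1) * rho) * sigma**2 / n
assert np.isclose(var_sbar, var_sbar_formula)

# Posterior variance: inverse of (prior precision + precision of the sample mean)
post_var = 1 / (1 / sigma0**2 + n / ((1 + (n - 1) * rho) * sigma**2))
assert post_var < sigma0**2  # the signals always (weakly) reduce my uncertainty
```

Varying `rho` and `n` in this sketch reproduces the comparative statics listed above.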
<p><img src="figures/variances-1.svg" alt=""></p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>For example, it is impossible for three normal variables to have equal variances and pairwise correlations of <code>\(-1\)</code>.
See <a href="https://bldavies.com/blog/transitivity-positive-correlations/">here</a> for an explanation. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Simulating Wiener and Ornstein-Uhlenbeck processes
https://bldavies.com/blog/simulating-wiener-ornstein-uhlenbeck-processes/
Tue, 29 Aug 2023 00:00:00 +0000
<p>A (standard) <a href="https://en.wikipedia.org/wiki/Wiener_process">Wiener process</a> is a continuous-time stochastic process <code>\(\{W(t)\}_{t\ge0}\)</code> with initial value <code>\(W(0)=0\)</code> and instantaneous increments
<code>$$\newcommand{\der}{\mathrm{d}} \der W(t)\sim N(0,\der t).$$</code>
We can simulate such a process as follows.
First, create a sequence of times <code>\(t\)</code> at which to store the value of <code>\(W(t)\)</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">t_max</span> <span class="o">=</span> <span class="m">100</span>
<span class="n">dt</span> <span class="o">=</span> <span class="m">1e-2</span>
<span class="n">t</span> <span class="o">=</span> <span class="nf">seq</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="n">t_max</span><span class="p">,</span> <span class="n">by</span> <span class="o">=</span> <span class="n">dt</span><span class="p">)</span>
</code></pre></div><p>Increasing <code>t_max</code> creates a longer path, while decreasing <code>dt</code> creates a smoother path.
Now simulate the random increments and take their cumulative sum:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">dW</span> <span class="o">=</span> <span class="nf">rnorm</span><span class="p">(</span><span class="nf">length</span><span class="p">(</span><span class="n">t</span><span class="p">)</span> <span class="o">-</span> <span class="m">1</span><span class="p">,</span> <span class="n">mean</span> <span class="o">=</span> <span class="m">0</span><span class="p">,</span> <span class="n">sd</span> <span class="o">=</span> <span class="nf">sqrt</span><span class="p">(</span><span class="n">dt</span><span class="p">))</span>
<span class="n">W</span> <span class="o">=</span> <span class="nf">c</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="nf">cumsum</span><span class="p">(</span><span class="n">dW</span><span class="p">))</span>
</code></pre></div><p>Here are three sample paths generated by this procedure:</p>
<p><img src="figures/wiener-paths-1.svg" alt=""></p>
<p>We can use <code>\(\{W(t)\}_{t\ge0}\)</code> to construct an <a href="https://en.wikipedia.org/wiki/Ornstein%E2%80%93Uhlenbeck_process">Ornstein-Uhlenbeck process</a> <code>\(\{X(t)\}_{t\ge0}\)</code>.
This process has instantaneous increments
<code>$$\der X(t)=-\theta X(t)\der t+\der W(t),$$</code>
where <code>\(\theta\ge0\)</code> controls the process’ tendency to <a href="https://en.wikipedia.org/wiki/Regression_toward_the_mean">mean-revert</a>.
We can compute its values <code>\(X(t)\)</code> by iterating over <code>dW</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">theta</span> <span class="o">=</span> <span class="m">1</span>
<span class="n">X</span> <span class="o">=</span> <span class="nf">rep</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="nf">length</span><span class="p">(</span><span class="n">t</span><span class="p">))</span>
<span class="n">i</span> <span class="o">=</span> <span class="m">1</span>
<span class="nf">while </span><span class="p">(</span><span class="n">i</span> <span class="o"><</span> <span class="nf">length</span><span class="p">(</span><span class="n">t</span><span class="p">))</span> <span class="p">{</span>
<span class="n">X[i</span> <span class="o">+</span> <span class="m">1</span><span class="n">]</span> <span class="o">=</span> <span class="n">X[i]</span> <span class="o">-</span> <span class="n">theta</span> <span class="o">*</span> <span class="n">X[i]</span> <span class="o">*</span> <span class="n">dt</span> <span class="o">+</span> <span class="n">dW[i]</span>
<span class="n">i</span> <span class="o">=</span> <span class="n">i</span> <span class="o">+</span> <span class="m">1</span>
<span class="p">}</span>
</code></pre></div><p>The chart below compares the sample paths obtained using different <code>\(\theta\)</code> values.
Each path uses the same realization of the underlying Wiener process <code>\(\{W(t)\}_{t\ge0}\)</code>.
If <code>\(\theta=0\)</code> then <code>\(X(t)=W(t)\)</code> for all <code>\(t\ge0\)</code>.
The mean magnitude of <code>\(X(t)\)</code> falls as <code>\(\theta\)</code> rises because this makes the process more mean-reverting.</p>
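For readers who prefer Python, here is a minimal re-implementation of the same Euler scheme (a sketch assuming numpy, not the post's original code) that also verifies the claim that \(\theta=0\) recovers the Wiener path:

```python
import numpy as np

rng = np.random.default_rng(1)
t_max, dt = 100, 1e-2
n = round(t_max / dt)
t = np.linspace(0, t_max, n + 1)

# Wiener increments and path, mirroring the R code above
dW = rng.normal(0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))

def ou_path(theta):
    """Euler scheme for dX = -theta * X dt + dW."""
    X = np.zeros(n + 1)
    for i in range(n):
        X[i + 1] = X[i] - theta * X[i] * dt + dW[i]
    return X

# theta = 0 switches off mean reversion, so X coincides with W exactly
assert np.allclose(ou_path(0), W)

# Larger theta pulls the path toward zero, shrinking its mean magnitude
assert np.mean(np.abs(ou_path(5))) < np.mean(np.abs(ou_path(1)))
```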
<p><img src="figures/orstein-uhlenbeck-paths-1.svg" alt=""></p>
Inverting matrices of pairwise minima
https://bldavies.com/blog/inverting-matrices-pairwise-minima/
Sun, 20 Aug 2023 00:00:00 +0000
<p>Let <code>\(0<x_1<x_2<\ldots<x_n\)</code> and let <code>\(A\)</code> be the symmetric <code>\(n\times n\)</code> matrix with <code>\({ij}^\text{th}\)</code> entry <code>\(A_{ij}=\min\{x_i,x_j\}\)</code>.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
This matrix has linearly independent columns and so is invertible.
Its inverse <code>\(A^{-1}\)</code> is symmetric, <a href="https://en.wikipedia.org/wiki/Tridiagonal_matrix">tridiagonal</a>, and has <code>\({ij}^\text{th}\)</code> entry
<code>$$[A^{-1}]_{ij}=\begin{cases} \frac{1}{x_1}+\frac{1}{x_2-x_1} & \text{if}\ i=j=1 \\ \frac{1}{x_i-x_{i-1}}+\frac{1}{x_{i+1}-x_i} & \text{if}\ 1<i=j<n \\ \frac{1}{x_n-x_{n-1}} & \text{if}\ i=j=n \\ -\frac{1}{x_j-x_i} & \text{if}\ i=j-1 \\ -\frac{1}{x_i-x_j} & \text{if}\ i=j+1 \\ 0 & \text{otherwise}. \end{cases}$$</code>
For example, if <code>\(x_i=2^{i-1}\)</code> for each <code>\(i\le n=5\)</code> then
<code>$$A=\begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 & 2 \\ 1 & 2 & 4 & 4 & 4 \\ 1 & 2 & 4 & 8 & 8 \\ 1 & 2 & 4 & 8 & 16 \\ \end{bmatrix}$$</code>
and
<code>$$A^{-1}=\begin{bmatrix} 2 & -1 & 0 & 0 & 0 \\ -1 & 1.5 & -0.5 & 0 & 0 \\ 0 & -0.5 & 0.75 & -0.25 & 0 \\ 0 & 0 & -0.25 & 0.375 & -0.125 \\ 0 & 0 & 0 & -0.125 & 0.125 \\ \end{bmatrix}$$</code>
You may wonder: why is this useful?
Suppose I observe data <code>\(\{(x_i,y_i)\}_{i=1}^n\)</code>, where the function <code>\(f:[0,\infty)\to\mathbb{R}\)</code> mapping regressors <code>\(x_i\ge0\)</code> to outcomes <code>\(y_i=f(x_i)\)</code> is the realization of a <a href="https://en.wikipedia.org/wiki/Wiener_process">Wiener process</a>.
I use these data to estimate some value <code>\(f(x)\)</code> via <a href="https://en.wikipedia.org/wiki/Bayesian_linear_regression">Bayesian regression</a>.
My estimate depends on the inverse of the covariance matrix for the outcome vector <code>\(y=(y_1,y_2,\ldots,y_n)\)</code>.
This matrix has <code>\({ij}^\text{th}\)</code> entry <code>\(\min\{x_i,x_j\}\)</code>, so I can compute its inverse using the expression above.</p>
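The closed-form inverse is easy to verify numerically. This Python sketch (using numpy; the sequence of \(x\) values matches the example above) builds the matrix of pairwise minima and checks the tridiagonal formula against a direct inversion:

```python
import numpy as np

# Any increasing sequence 0 < x_1 < ... < x_n works; this one matches the example
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
n = len(x)

# Matrix of pairwise minima: A_ij = min(x_i, x_j)
A = np.minimum.outer(x, x)

# Assemble A^{-1} from the tridiagonal formula
d = np.diff(x)  # d[i] = x_{i+1} - x_i
diag = np.concatenate(([1 / x[0] + 1 / d[0]],   # i = j = 1
                       1 / d[:-1] + 1 / d[1:],  # 1 < i = j < n
                       [1 / d[-1]]))            # i = j = n
B = np.diag(diag) - np.diag(1 / d, 1) - np.diag(1 / d, -1)

assert np.allclose(B, np.linalg.inv(A))  # matches the direct inverse
assert np.allclose(A @ B, np.eye(n))
```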
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p><a href="https://mathoverflow.net/questions/453045/is-there-a-name-for-this-family-of-matrices">Let me know</a> if the family of such matrices has a name! <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Correlation and concordance
https://bldavies.com/blog/correlation-concordance/
Thu, 03 Aug 2023 00:00:00 +0000
<p>Let <code>\(X=(X_1,X_2)\)</code> be a random vector in <code>\(\mathbb{R}^2\)</code>.
Two realizations <code>\(x\)</code> and <code>\(x'\)</code> of <code>\(X\)</code> form a <a href="https://en.wikipedia.org/wiki/Concordant_pair">concordant pair</a> if <code>\((x_2'-x_2)\)</code> and <code>\((x_1'-x_1)\)</code> have the same sign.
What’s the probability of sampling a concordant pair when <code>\(X\)</code> is bivariate normal?</p>
<p>For example, suppose <code>\(X_1\)</code> and <code>\(X_2\)</code> have zero means, unit variances, and a correlation of <code>\(\rho\)</code>.
The scatter plots below show 100 realizations of <code>\((X_1,X_2)\)</code> when <code>\(\rho\in\{-0.5,0,0.5\}\)</code>.
These realizations contain
<code>$$\binom{100}{2}=4,\!950$$</code>
pairs, of which 36% are concordant when <code>\(\rho=-0.5\)</code>.
This percentage rises to 48% when <code>\(\rho=0\)</code> and to 71% when <code>\(\rho=0.5\)</code>.
Increasing <code>\(\rho\)</code> makes concordance more likely because it makes <code>\((x'_2-x_2)\)</code> track <code>\((x'_1-x_1)\)</code> more closely.</p>
<p><img src="figures/scatter-1.svg" alt=""></p>
<p>Different samples give different concordance rates due to sampling variation.
We can remove this variation by deriving the concordance rate analytically.
To begin, suppose <code>\(X\)</code> has mean <code>\(\mathrm{E}[X]=(\mu_1,\mu_2)\)</code> and covariance matrix
<code>$$\mathrm{Var}(X)=\begin{bmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}.$$</code>
Then <code>\(X_2\mid X_1\)</code> is normal with mean
<code>$$\mathrm{E}[X_2\mid X_1]=\mu_2+\frac{\rho\sigma_2}{\sigma_1}(X_1-\mu_1)$$</code>
and variance
<code>$$\mathrm{Var}(X_2\mid X_1)=(1-\rho^2)\sigma_2^2.$$</code>
So for any two realizations <code>\(x\)</code> and <code>\(x'\)</code> of <code>\(X\)</code> we can write
<code>$$\renewcommand{\epsilon}{\varepsilon} x'_2-x_2=\frac{\rho\sigma_2}{\sigma_1}\left(x'_1-x_1\right)+\epsilon$$</code>
with <code>\(\epsilon\sim N(0,2(1-\rho^2)\sigma_2^2)\)</code>.
Now <code>\(x'_1-x_1\sim N(0,2\sigma_1^2)\)</code> is normal, and so
<code>$$z\equiv \frac{x'_1-x_1}{\sigma_1\sqrt{2}}$$</code>
is standard normal and exceeds zero if and only if <code>\(x'_1>x_1\)</code>.
Letting <code>\(f\)</code> and <code>\(\phi\)</code> be the density functions for <code>\(\epsilon\)</code> and <code>\(z\)</code> then gives
<code>$$\newcommand{\der}{\mathrm{d}} \begin{align} \Pr(x'_2>x_2\ \text{and}\ x'_1>x_1) &= \Pr(\sqrt{2}\rho\sigma_2 z+\epsilon>0\ \text{and}\ z>0) \\ &= \int_0^\infty\left(\int_{-\sqrt{2}\rho\sigma_2 z}^\infty f(\epsilon)\,\der \epsilon\right)\phi(z)\,\der z \\ &\overset{\star}{=} \int_0^\infty\left(\int_{\frac{-\rho z}{\sqrt{1-\rho^2}}}^\infty \phi(w)\,\der w\right)\phi(z)\,\der z \\ &= \int_0^\infty\left(1-\Phi\left(\frac{-\rho z}{\sqrt{1-\rho^2}}\right)\right)\phi(z)\,\der z \\ &\overset{\star\star}{=} \frac{1}{2}-\int_0^\infty\Phi\left(\frac{-\rho z}{\sqrt{1-\rho^2}}\right)\phi(z)\,\der z, \end{align}$$</code>
where <code>\(\Phi\)</code> is the standard normal CDF, where <code>\(\star\)</code> uses the change of variables
<code>$$w\equiv \frac{\epsilon}{\sigma_2\sqrt{2(1-\rho^2)}},$$</code>
and where <code>\(\star\star\)</code> uses the symmetry of <code>\(\phi\)</code> about <code>\(z=0\)</code>.
But <code>\(f\)</code> is symmetric about <code>\(\epsilon=0\)</code>, which implies
<code>$$\Pr(x'_2>x_2\ \text{and}\ x'_1>x_1)=\Pr(x'_2<x_2\ \text{and}\ x'_1<x_1),$$</code>
and therefore
<code>$$\begin{align} C(\rho) &\equiv \Pr(x\ \text{and}\ x'\ \text{are concordant}) \\ &= \Pr(x'_2>x_2\ \text{and}\ x'_1>x_1)+\Pr(x'_2<x_2\ \text{and}\ x'_1<x_1) \\ &= 1-2\int_0^\infty\Phi\left(\frac{-\rho z}{\sqrt{1-\rho^2}}\right)\phi(z)\,\der z. \end{align}$$</code>
The concordance rate <code>\(C(\rho)\)</code> depends on the correlation <code>\(\rho\)</code> of <code>\(X_1\)</code> and <code>\(X_2\)</code>, but not their means or variances.
It has value <code>\(C(0)=0.5\)</code> when <code>\(\rho=0\)</code> because the integrand reduces to <code>\(\Phi(0)\phi(z)=0.5\phi(z)\)</code>, whose integral over <code>\([0,\infty)\)</code> equals <code>\(0.25\)</code>.
Intuitively, if <code>\(X_1\)</code> and <code>\(X_2\)</code> are uncorrelated then we can’t use <code>\((x'_1-x_1)\)</code> to predict <code>\((x'_2-x_2)\)</code>, which is equally likely to be positive or negative.
Whereas if <code>\(\lvert\rho\rvert=1\)</code> then <code>\((x'_1-x_1)\)</code> predicts <code>\((x'_2-x_2)\)</code> perfectly, and so
<code>$$\lim_{\rho\to1}C(\rho)=1$$</code>
and
<code>$$\lim_{\rho\to-1}C(\rho)=0.$$</code>
The chart below verifies that the concordance rate <code>\(C(\rho)\)</code> grows with <code>\(\rho\)</code>.
It also shows that
<code>$$C(\rho)+C(-\rho)=1.$$</code>
Thus, for example, we have <code>\(C(-0.5)=1/3\)</code> and <code>\(C(0.5)=2/3\)</code>.
These values remove the sampling error from the estimates 0.36 and 0.71 obtained using the 100 realizations above.</p>
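These values can be confirmed by evaluating the integral numerically. The Python sketch below (using only numpy and the standard library) approximates \(C(\rho)\) with a trapezoidal rule, truncating the integral at \(z=10\), where the integrand is negligible:

```python
import numpy as np
from math import erf, sqrt, pi

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def C(rho, z_max=10.0, steps=200000):
    # Trapezoidal approximation of
    # 1 - 2 * int_0^inf Phi(-rho z / sqrt(1 - rho^2)) phi(z) dz
    z = np.linspace(0, z_max, steps + 1)
    phi = np.exp(-z**2 / 2) / sqrt(2 * pi)
    g = np.array([Phi(-rho * zi / sqrt(1 - rho**2)) for zi in z]) * phi
    dz = z_max / steps
    integral = dz * (g.sum() - 0.5 * (g[0] + g[-1]))
    return 1 - 2 * integral

assert abs(C(0.5) - 2 / 3) < 1e-6   # C(0.5) = 2/3, as stated above
assert abs(C(-0.5) - 1 / 3) < 1e-6  # C(-0.5) = 1/3
assert abs(C(0.5) + C(-0.5) - 1) < 1e-8
```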
<p><img src="figures/integral-1.svg" alt=""></p>
The option value of waiting
https://bldavies.com/blog/option-value-waiting/
Sun, 16 Jul 2023 00:00:00 +0000
<p>This post is about waiting for information before taking an action.
It uses a simple model to explain when and why waiting is valuable.
It formalizes some ideas discussed in my posts on <a href="https://bldavies.com/blog/climate-change-transport-planning/">climate change</a> and <a href="https://bldavies.com/blog/policymaking-under-uncertainty/">pandemic</a> policy.</p>
<p>Suppose it costs <code>\(c>0\)</code> to take an action that pays <code>\(b>c\)</code> if it is beneficial (<code>\(\omega=1\)</code>) and zero otherwise (<code>\(\omega=0\)</code>).
I take the action if its expected net benefit
<code>$$\newcommand{\E}{\mathrm{E}} \E[\omega b-c]=pb-c$$</code>
exceeds zero, where <code>\(p=\Pr(\omega=1)\)</code> is my prior belief about <code>\(\omega\)</code>.
Thus, my decision rule is to take the action whenever <code>\(p\)</code> exceeds the cost-benefit ratio <code>\(c/b\)</code>.</p>
<p>Now suppose I can wait for a <a href="https://bldavies.com/blog/learning-noisy-signals/">noisy signal</a> <code>\(s\in\{0,1\}\)</code> with error rate
<code>$$\renewcommand{\epsilon}{\varepsilon} \Pr(s\not=\omega\mid \omega)=\epsilon\in[0,0.5].$$</code>
I use my prior, the signal, and Bayes’ rule to form a posterior belief
<code>$$\begin{align} q_s &\equiv \Pr(\omega=1\mid s) \\ &= \begin{cases} \frac{\epsilon p}{(1-\epsilon)(1-p)+\epsilon p} & \text{if}\ s=0 \\ \frac{(1-\epsilon)p}{\epsilon(1-p)+(1-\epsilon)p} & \text{if}\ s=1 \end{cases} \end{align}$$</code>
about <code>\(\omega\)</code>.
Then I take the action if its expected net benefit
<code>$$\begin{align} \E[\omega b-c\mid s] &= q_sb-c \end{align}$$</code>
given <code>\(s\)</code> exceeds zero.
This happens with probability
<code>$$\Pr(q_sb-c\ge0)=\begin{cases} 1 & \text{if}\ c/b\le q_0 \\ \Pr(s=1) & \text{if}\ q_0<c/b\le q_1 \\ 0 & \text{if}\ q_1<c/b, \end{cases}$$</code>
where the probability
<code>$$\Pr(s=1)=\epsilon(1-p)+(1-\epsilon)p$$</code>
of receiving a positive signal depends on my prior <code>\(p\)</code> and the error rate <code>\(\epsilon\)</code>.</p>
<p>If <code>\(c/b\le q_0\)</code> or <code>\(q_1<c/b\)</code> then the signal doesn’t affect whether I take the action, so I don’t need to wait.
But if <code>\(q_0<c/b\le q_1\)</code> then waiting gives me a <a href="https://en.wikipedia.org/wiki/Real_options_valuation">real option</a> not to take the action if I learn it isn’t beneficial.
So the expected benefit of waiting equals
<code>$$\begin{align} W &\equiv \delta\,\E\left[\E[\max\{0,q_sb-c\}\mid s]\right] \\ &= \begin{cases} \delta(pb-c) & \text{if}\ c/b\le q_0 \\ \delta(q_1b-c)\Pr(s=1) & \text{if}\ q_0<c/b\le q_1 \\ 0 & \text{if}\ q_1<c/b, \end{cases} \end{align}$$</code>
where the discount factor <code>\(\delta\in[0,1]\)</code> captures
(i) my patience and
(ii) my confidence that the action will still be available if I wait.</p>
<p>I should take the action <em>before</em> receiving <code>\(s\)</code> if and only if the expected net benefit <code>\((pb-c)\)</code> under my prior exceeds <code>\(W\)</code>.
This happens precisely when my prior exceeds
<code>$$p^*\equiv\frac{(1-\delta\epsilon)c}{b-\delta((1-\epsilon)b-(1-2\epsilon)c)}.$$</code>
The following chart plots <code>\(p^*\)</code> against <code>\(\delta\)</code> when <code>\(c/b\in\{0.1,0.3,0.5\}\)</code> and <code>\(\epsilon\in\{0,0.25,0.5\}\)</code>.
Increasing the discount factor <code>\(\delta\)</code> or the cost-benefit ratio <code>\(c/b\)</code> raises the option value of waiting, which raises the threshold prior <code>\(p^*\)</code> above which I should take the action.
Increasing the error rate <code>\(\epsilon\)</code> makes the signal less informative, which <em>lowers</em> the option value of waiting and, hence, lowers <code>\(p^*\)</code>.
If <code>\(\epsilon=0.5\)</code> then the signal is uninformative and so <code>\(p^*=c/b\)</code> independently of <code>\(\delta\)</code>.</p>
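A quick arithmetic check of the threshold formula (a sketch with hypothetical parameter values, chosen so that the middle case \(q_0<c/b\le q_1\) applies): at \(p=p^*\), acting now and waiting should give exactly equal expected payoffs.

```python
# Hypothetical parameters (chosen so that q0 < c/b <= q1 holds at p = p*)
b, c, delta, eps = 1.0, 0.3, 0.9, 0.25

# Threshold prior from the formula above
p_star = (1 - delta * eps) * c / (b - delta * ((1 - eps) * b - (1 - 2 * eps) * c))

p = p_star
q0 = eps * p / ((1 - eps) * (1 - p) + eps * p)        # posterior after s = 0
q1 = (1 - eps) * p / (eps * (1 - p) + (1 - eps) * p)  # posterior after s = 1
pr_s1 = eps * (1 - p) + (1 - eps) * p                 # Pr(s = 1)

assert q0 < c / b <= q1  # the "signal matters" case applies

act_now = p * b - c                  # expected net benefit of acting before s
wait = delta * (q1 * b - c) * pr_s1  # expected benefit W of waiting

assert abs(act_now - wait) < 1e-12   # indifferent exactly at p = p*
```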
<p><img src="figures/thresholds-1.svg" alt=""></p>
Learning in continuous time
https://bldavies.com/blog/learning-continuous-time/
Sat, 08 Jul 2023 00:00:00 +0000
<p>This post describes a continuous-time model of Bayesian learning about a binary state.
It complements the discrete-time models discussed in previous posts (see, e.g., <a href="https://bldavies.com/blog/learning-noisy-signals/">here</a> or <a href="https://bldavies.com/blog/binary-signals-posterior-variances/">here</a>).
I present <a href="#model">the model</a>, discuss its <a href="#learning-dynamics">learning dynamics</a>, and <a href="#deriving-the-belief-increments">derive these dynamics</a> analytically.</p>
<p>The model has been used to study decision times (<a href="https://doi.org/10.1257/aer.20150742">Fudenberg et al., 2018</a>), experimentation (<a href="https://doi.org/10.1111/1468-0262.00022">Bolton and Harris, 1999</a>; <a href="https://doi.org/10.1111/1468-0262.00259">Moscarini and Smith, 2001</a>), information acquisition (<a href="https://doi.org/10.2139/ssrn.2991567">Morris and Strack, 2019</a>), and persuasion (<a href="https://doi.org/10.1016/j.jmateco.2021.102534">Liao, 2021</a>).
It also underlies the <a href="https://en.wikipedia.org/wiki/Two-alternative_forced_choice#Drift-diffusion_model">drift-diffusion model</a> of reaction times used by psychologists—see <a href="https://doi.org/10.1037/0033-295X.85.2.59">Ratcliff (1978)</a> for an early example, and <a href="https://doi.org/10.1016/j.jet.2023.105612">Hébert and Woodford (2023)</a> or <a href="https://doi.org/10.1006/jmps.1999.1260">Smith (2000)</a> for related discussions.</p>
<h2 id="model">Model</h2>
<p>Suppose I want to learn about a state <code>\(\mu\)</code> that may be high (equal to <code>\(H\)</code>) or low (equal to <code>\(L<H\)</code>).
I observe a continuous sample path <code>\((X_t)_{t\ge0}\)</code> with random, instantaneous increments
<code>$$\DeclareMathOperator{\E}{E} \newcommand{\der}{\mathrm{d}} \newcommand{\R}{\mathbb{R}} \der X_t=\mu\der t+\sigma \der W_t,$$</code>
where <code>\(\sigma>0\)</code> amplifies the noise generated by the standard <a href="https://en.wikipedia.org/wiki/Wiener_process">Wiener process</a> <code>\((W_t)_{t\ge0}\)</code>.
These increments provide <a href="https://bldavies.com/blog/learning-noisy-signals/">noisy signals</a> of the state <code>\(\mu\)</code>.
I use these signals, my prior belief <code>\(p_0=\Pr(\mu=H)\)</code>, and Bayes’ rule to form a posterior belief
<code>$$p_t\equiv \Pr\left(\mu=H\mid (X_s)_{s<t}\right)$$</code>
about <code>\(\mu\)</code> given the sample path observed up to time <code>\(t\)</code>.
As <a href="#deriving-the-belief-increments">shown below</a>, this posterior belief has increments
<code>$$\der p_t=p_t(1-p_t)\frac{(H-L)}{\sigma}\der Z_t,$$</code>
where <code>\((Z_t)_{t\ge0}\)</code> is a Wiener process with respect to my information at time <code>\(t\)</code>.
Its increments
<code>$$\der Z_t=\frac{1}{\sigma}\left(\der X_t-\hat\mu_t\der t\right)$$</code>
exceed zero precisely when the corresponding increments <code>\(\der X_t\)</code> in the sample path exceed my posterior estimates
<code>$$\begin{align} \hat\mu_t &\equiv \E\left[\mu\mid (X_s)_{s<t}\right] \\ &= p_tH+(1-p_t)L. \end{align}$$</code></p>
<h2 id="learning-dynamics">Learning dynamics</h2>
<p>My belief increments <code>\(\der p_t\)</code> get smaller as <code>\(p_t\)</code> approaches zero or one.
The ratio <code>\((H-L)/\sigma\)</code> controls how quickly this happens.
Intuitively, if <code>\((H-L)\)</code> is large then the high and low states imply easily distinguishable trends in <code>\((X_t)_{t\ge0}\)</code>.
But if <code>\(\sigma\)</code> is large then these trends are blurred by the random fluctuations <code>\(\sigma\der W_t\)</code>.</p>
<p>I illustrate these dynamics in the chart below.
It shows the sample paths <code>\((X_t)_{t\ge0}\)</code> and corresponding beliefs <code>\((p_t)_{t\ge0}\)</code> when <code>\((H,L,\mu,p_0)=(1,0,H,0.5)\)</code> and <code>\(\sigma\in\{1,2\}\)</code>.
I use the same realization of the underlying Wiener process <code>\((W_t)_{t\ge0}\)</code> for each value of <code>\(\sigma\)</code>.
Increasing this value slows my convergence to the correct belief <code>\(p_t=1\)</code> because it makes the signals <code>\(\der X_t\)</code> less informative about <code>\(\mu=H\)</code>.</p>
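These dynamics can be simulated directly (a Python sketch discretizing time and applying Bayes' rule to each increment; the shared normalization constants of the two normal likelihoods cancel, so they are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
H, L, sigma, p0 = 1.0, 0.0, 1.0, 0.5
mu = H                   # true state is high
T, dt = 100.0, 0.01

p = p0
for _ in range(round(T / dt)):
    dX = mu * dt + sigma * rng.normal(0, np.sqrt(dt))  # observed increment
    # Likelihoods of dX under each state (common constants cancel in the ratio)
    fH = np.exp(-(dX - H * dt)**2 / (2 * sigma**2 * dt))
    fL = np.exp(-(dX - L * dt)**2 / (2 * sigma**2 * dt))
    p = p * fH / (p * fH + (1 - p) * fL)  # Bayes' rule

assert p > 0.99  # belief converges toward the correct (high) state
```

Raising `sigma` in this sketch slows the convergence of `p`, matching the chart.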
<p><img src="figures/example-1.svg" alt=""></p>
<h2 id="deriving-the-belief-increments">Deriving the belief increments</h2>
<p>The increments <code>\(\der W_t\)</code> of the Wiener process <code>\((W_t)_{t\ge0}\)</code> are iid normally distributed with mean zero and variance <code>\(\der t\)</code>:
<code>$$\der W_t\sim N(0,\der t).$$</code>
Thus, given <code>\(\mu\)</code>, the increments <code>\(\der X_t\)</code> of the sample path <code>\((X_t)_{t\ge0}\)</code> are iid normal with mean <code>\(\mu\der t\)</code> and variance <code>\(\sigma^2\der t\)</code>:
<code>$$\der X_t\mid\mu\sim N(\mu\der t,\sigma^2\der t).$$</code>
So these increments have conditional PDF
<code>$$\begin{align} f_\mu(\der X_t) &= \frac{1}{\sigma\sqrt{2\pi\der t}}\exp\left(-\frac{(\der X_t-\mu\der t)^2}{2\sigma^2\der t}\right) \\ &= \frac{1}{\sigma\sqrt{2\pi\der t}}\exp\left(-\frac{(\der X_t)^2}{2\sigma^2\der t}\right)\exp\left(\frac{\mu\der X_t}{\sigma^2}-\frac{\mu^2\der t}{2\sigma^2}\right). \end{align}$$</code>
But the rules of <a href="https://en.wikipedia.org/wiki/It%C3%B4_calculus">Itô calculus</a> imply <code>\((\der X_t)^2=\sigma^2\der t\)</code> and
<code>$$\begin{align} \exp\left(\frac{\der X_t\mu}{\sigma^2}-\frac{\mu^2\der t}{2\sigma^2}\right) &= \sum_{k\ge0}\frac{1}{k!}\left(\frac{\mu\der X_t}{\sigma^2}-\frac{\mu^2\der t}{2\sigma^2}\right)^k \\ &= 1+\frac{\mu\der X_t}{\sigma^2} \end{align}$$</code>
because these rules treat terms of order <code>\((\der t)^2\)</code> or smaller as equal to zero.
Thus
<code>$$f_\mu(\der X_t)=\frac{1}{\sigma^3\sqrt{2\pi\der t}}\exp\left(-\frac{1}{2}\right)\left(\mu\der X_t+\sigma^2\right)$$</code>
for each <code>\(\mu\in\{H,L\}\)</code>.
Applying <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem">Bayes’ rule</a> then gives
<code>$$\begin{align} p_{t+\der t} &= \frac{p_tf_H(\der X_t)}{p_tf_H(\der X_t)+(1-p_t)f_L(\der X_t)} \\ &= \frac{p_t\left(H\der X_t+\sigma^2\right)}{\hat\mu_t\der X_t+\sigma^2}, \end{align}$$</code>
where <code>\(\hat\mu_t=\E[\mu\mid (X_s)_{s<t}]\)</code> is my posterior estimate of <code>\(\mu\)</code>.
So the belief process <code>\((p_t)_{t\ge0}\)</code> has increments
<code>$$\begin{align} \der p_t &\equiv p_{t+\der t}-p_t \\ &= \frac{p_t(1-p_t)(H-L)\der X_t}{\hat\mu_t\der X_t+\sigma^2}. \end{align}$$</code>
Finally, taking a Maclaurin series expansion and applying the rules of Itô calculus gives
<code>$$\begin{align} \frac{\der X_t}{\hat\mu_t\der X_t+\sigma^2} &= \der X_t\sum_{k\ge0}\frac{(-1)^k\hat\mu_t^k}{(\sigma^2)^{k+1}}(\der X_t)^k \\ &= \der X_t\left(\frac{1}{\sigma^2}-\frac{\hat\mu_t}{\sigma^4}\der X_t\right) \\ &= \frac{1}{\sigma^2}\left(\der X_t-\hat\mu_t\der t\right), \end{align}$$</code>
from which we obtain the expressions for <code>\(\der p_t\)</code> and <code>\(\der Z_t\)</code> provided above.</p>
Paying for precision
https://bldavies.com/blog/paying-precision/
Tue, 04 Jul 2023 00:00:00 +0000
<p>Suppose my payoff <code>\(u(a,\mu)\equiv-(a-\mu)^2\)</code> from taking an action <code>\(a\in\mathbb{R}\)</code> depends on an unknown state <code>\(\mu\in\mathbb{R}\)</code>.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
I can learn about <code>\(\mu\)</code> by collecting data <code>\(X=\{x_1,x_2,\ldots,x_n\}\)</code>, where the observations <code>\(x_i\)</code> are iid normally distributed with mean <code>\(\mu\)</code> and variance <code>\(\sigma^2\)</code>:<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
<code>$$x_i\mid \mu\sim N(\mu,\sigma^2).$$</code>
I use these data, my prior belief
<code>$$\mu\sim N(\mu_0,\sigma_0^2),$$</code>
and Bayes’ rule to form a posterior belief
<code>$$\mu\mid X\sim N\left(\frac{\tau_0}{\tau_0+n\tau}\mu_0+\frac{n\tau}{\tau_0+n\tau}\bar{x},\frac{1}{\tau_0+n\tau}\right),$$</code>
where <code>\(\tau_0\equiv1/\sigma_0^2\)</code> is the precision of my prior, <code>\(\tau\equiv1/\sigma^2\)</code> is the precision of the <code>\(x_i\)</code>, and
<code>$$\bar{x}\equiv\frac{1}{n}\sum_{i=1}^nx_i$$</code>
is their arithmetic mean.
Then my expected payoff from taking action <code>\(a\)</code> equals
<code>$$\DeclareMathOperator{\E}{E} \DeclareMathOperator{\Var}{Var} \E[u(a,\mu)\mid X]=-(a-\E[\mu\mid X])^2-\Var(\mu\mid X).$$</code>
I maximize this payoff by choosing <code>\(a^*\equiv\E[\mu\mid X]\)</code>.
This yields expected payoff
<code>$$\E[u(a^*,\mu)\mid X]=-\frac{1}{\tau_0+n\tau},$$</code>
which is increasing in <code>\(n\)</code>.
Intuitively, collecting more data makes me more informed and makes my optimal action more likely to be “correct.”
But data are costly: I have to pay <code>\(\kappa n\tau\)</code> to collect <code>\(n\)</code> observations, where <code>\(\kappa>0\)</code> captures the marginal cost of information.<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>
I choose <code>\(n\)</code> to maximize my total payoff
<code>$$\begin{align*} U(n) &\equiv \E[u(a^*,\mu)\mid X]-\kappa n\tau, \end{align*}$$</code>
which has maximizer
<code>$$n^*=\max\left\{0,\frac{1}{\tau}\left(\frac{1}{\sqrt\kappa}-\tau_0\right)\right\}.$$</code>
If <code>\(1\le\sqrt\kappa\tau_0\)</code> then <code>\(n^*=0\)</code> because the cost of collecting <em>any</em> data isn’t worth the variance reduction they deliver.
Whereas if <code>\(1>\sqrt\kappa\tau_0\)</code> then <code>\(n^*\)</code> is strictly positive and gives me total payoff
<code>$$U(n^*)=-2\sqrt\kappa+\kappa\tau_0.$$</code>
Both <code>\(n^*\)</code> and <code>\(U(n^*)\)</code> are decreasing in <code>\(\kappa\)</code>.
Intuitively, making the data more expensive makes me want to collect less, leaving me less informed and worse off.
In contrast, making my prior more precise (i.e., increasing <code>\(\tau_0\)</code>) makes me want to collect less data but leaves me <em>better</em> off.
This is because starting out well-informed means I can buy less data and still end up well-informed.</p>
<p>Curiously, making the <code>\(x_i\)</code> more precise (i.e., increasing <code>\(\tau\)</code>) makes me want to collect more data but does not change my welfare.
This is because the cost <code>\(\kappa\tau\)</code> of each observation <code>\(x_i\)</code> scales with its precision.
This cost exactly offsets the value of the information gained, leaving my total payoff <code>\(U(n^*)\)</code> unchanged.</p>
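<p>The closed forms above are easy to sanity-check numerically. The Python sketch below (my own check, with parameter values chosen purely for illustration; nothing here comes from the paper) confirms that <code>\(n^*\)</code> beats nearby alternatives and that <code>\(U(n^*)=-2\sqrt\kappa+\kappa\tau_0\)</code> regardless of <code>\(\tau\)</code>:</p>

```python
import math

def U(n, kappa, tau0, tau):
    # Total payoff: negative posterior variance minus the linear data cost.
    return -1 / (tau0 + n * tau) - kappa * n * tau

def n_star(kappa, tau0, tau):
    # Closed-form maximizer derived in the text.
    return max(0.0, (1 / math.sqrt(kappa) - tau0) / tau)

kappa, tau0 = 0.01, 2.0
for tau in (0.5, 1.0, 4.0):
    ns = n_star(kappa, tau0, tau)
    # n* beats nearby feasible alternatives...
    assert U(ns, kappa, tau0, tau) >= U(ns + 0.01, kappa, tau0, tau)
    assert U(ns, kappa, tau0, tau) >= U(max(0.0, ns - 0.01), kappa, tau0, tau)
    # ...and delivers U(n*) = -2*sqrt(kappa) + kappa*tau0, independent of tau.
    assert math.isclose(U(ns, kappa, tau0, tau), -2 * math.sqrt(kappa) + kappa * tau0)
```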
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>See <a href="https://bldavies.com/blog/paying-truth/">here</a> for my discussion of the case when the state and data are binary. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>This is the same as letting <code>\(x_i=\mu+\varepsilon_i\)</code> with iid errors <code>\(\varepsilon_i\sim N(0,\sigma^2)\)</code>. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p><a href="https://doi.org/10.1257/aer.20190185">Pomatto et al. (2023)</a> show that this cost function (uniquely) satisfies some attractive properties.
Linear cost functions also appear in many sequential sampling problems (see, e.g., <a href="https://doi.org/10.1214/aoms/1177731118">Wald’s (1945)</a> classic model or <a href="https://dx.doi.org/10.2139/ssrn.2991567">Morris and Strack’s (2019)</a> discussion of it) and their continuous-time analogues (see, e.g., <a href="https://doi.org/10.1257/aer.20150742">Fudenberg et al. (2018)</a> or <a href="https://doi.org/10.3982/ECTA18324">Liang et al. (2022)</a>). <a href="#fnref:3" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Binary signals and posterior variances
https://bldavies.com/blog/binary-signals-posterior-variances/
Sun, 02 Jul 2023 00:00:00 +0000https://bldavies.com/blog/binary-signals-posterior-variances/<p>Suppose I receive a <a href="https://bldavies.com/blog/learning-noisy-signals/">noisy signal</a> <code>\(s\in\{0,1\}\)</code> about an unknown state <code>\(\omega\in\{0,1\}\)</code>.
The signal has false positive rate
<code>$$\renewcommand{\epsilon}{\varepsilon} \Pr(s=1\mid\omega=0)=\alpha$$</code>
and false negative rate
<code>$$\Pr(s=0\mid\omega=1)=\beta$$</code>
with <code>\(\alpha,\beta\in[0,0.5]\)</code>.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
I use these rates, my prior belief <code>\(p=\Pr(\omega=1)\)</code>, and <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem">Bayes’ rule</a> to form a posterior belief
<code>$$\begin{align} q_s &\equiv \Pr(\omega=1\mid s) \\ &= \frac{\Pr(s\mid\omega=1)\Pr(\omega=1)}{\Pr(s)} \\ &= \begin{cases} \frac{\beta p}{(1-\alpha)(1-p)+\beta p} & \text{if}\ s=0 \\ \frac{(1-\beta)p}{\alpha(1-p)+(1-\beta)p} & \text{if}\ s=1 \end{cases} \end{align}$$</code>
that depends on the signal I receive.</p>
<p>Now suppose I take an action <code>\(a\in[0,1]\)</code> with cost <code>\(c(a,\omega)\equiv(a-\omega)^2\)</code>.
I want to minimize my expected cost
<code>$$\DeclareMathOperator{\E}{E} \begin{align} \E[c(a,\omega)\mid s] &= (1-q_s)c(a,0)+q_sc(a,1) \\ &= (1-q_s)a^2+q_s(a-1)^2 \end{align}$$</code>
given <code>\(s\)</code>, which leads me to choose <code>\(a=q_s\)</code>.
Then my minimized expected cost
<code>$$\begin{align} \E[c(q_s,\omega)\mid s] &= q_s(1-q_s) \\ &= p(1-p)\times\begin{cases} \frac{(1-\alpha)\beta}{\left((1-\alpha)(1-p)+\beta p\right)^2} & \text{if}\ s=0 \\ \frac{\alpha(1-\beta)}{\left(\alpha(1-p)+(1-\beta)p\right)^2} & \text{if}\ s=1 \end{cases} \end{align}$$</code>
equals the posterior variance in my belief about <code>\(\omega\)</code> after receiving <code>\(s\)</code>.
The expected value of this variance <em>before</em> receiving <code>\(s\)</code> equals
<code>$$\begin{align} V(p,\alpha,\beta) &\equiv q_0(1-q_0)\Pr(s=0)+q_1(1-q_1)\Pr(s=1) \\ &= p(1-p)\times\frac{\alpha(1-\alpha)(1-p)+\beta(1-\beta)p}{\left((1-\alpha)(1-p)+\beta p\right)\left(\alpha(1-p)+(1-\beta)p\right)}, \end{align}$$</code>
which depends on my prior <code>\(p\)</code> as well as the error rates <code>\(\alpha\)</code> and <code>\(\beta\)</code>.
For example, the chart below plots
<code>$$V(p,\epsilon,\epsilon)=p(1-p)\times\frac{\epsilon(1-\epsilon)}{p(1-p)+\epsilon(1-\epsilon)(1-2p)^2}$$</code>
against <code>\(\epsilon\)</code> when <code>\(p\in\{0.5,0.7,0.9\}\)</code>.
If <code>\(\epsilon=0\)</code> then the signal is fully informative because it always matches the state <code>\(\omega\)</code>.
Larger values of <code>\(\epsilon\le0.5\)</code> lead to less precise posterior beliefs.
Indeed if <code>\(\epsilon=0.5\)</code> then the signal is uninformative because <code>\(\Pr(s=1)=0.5\)</code> (and, hence, <code>\(q_0=q_1=p\)</code>) independently of <code>\(\omega\)</code>.
The slope <code>\(\partial V(p,\epsilon,\epsilon)/\partial\epsilon\)</code> falls as my prior <code>\(p\)</code> moves away from <code>\(0.5\)</code> because having a more precise prior makes my beliefs less sensitive to the signal.</p>
<p><img src="figures/convexity-1.svg" alt=""></p>
<p>The next chart shows the contours of <code>\(V(p,\alpha,\beta)\)</code> in the <code>\(\alpha\beta\)</code>-plane.
These contours are symmetric across the diagonal line <code>\(\alpha=\beta\)</code> when my prior <code>\(p\)</code> equals <code>\(0.5\)</code> but asymmetric when <code>\(p\not=0.5\)</code>.
Intuitively, if I have a strong prior that <code>\(\omega=1\)</code> then positive signals <code>\(s=1\)</code> are less surprising, and shift my belief less, than negative signals <code>\(s=0\)</code>.
So if <code>\(p>0.5\)</code> then I need to increase the false positive rate <code>\(\alpha\)</code> by more than I decrease the false negative rate <code>\(\beta\)</code> to keep <code>\(V(p,\alpha,\beta)\)</code> constant.</p>
<p><img src="figures/contours-1.svg" alt=""></p>
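<p>Both the posterior formula and the closed form for <code>\(V(p,\alpha,\beta)\)</code> can be checked by enumerating the two signal realizations. The Python sketch below (my own check, with arbitrarily chosen rational parameter values) uses exact arithmetic so the comparisons are equalities rather than floating-point approximations:</p>

```python
from fractions import Fraction as F

def posterior(p, alpha, beta, s):
    # Bayes' rule: Pr(w = 1 | s) given false positive rate alpha and false negative rate beta.
    like1 = (1 - beta) if s == 1 else beta    # Pr(s | w = 1)
    like0 = alpha if s == 1 else (1 - alpha)  # Pr(s | w = 0)
    return like1 * p / (like1 * p + like0 * (1 - p))

def V(p, alpha, beta):
    # Expected posterior variance: average q_s * (1 - q_s) over s in {0, 1}.
    total = 0
    for s in (0, 1):
        pr_s = (alpha if s == 1 else 1 - alpha) * (1 - p) + ((1 - beta) if s == 1 else beta) * p
        q = posterior(p, alpha, beta, s)
        total += pr_s * q * (1 - q)
    return total

p, a, b = F(7, 10), F(1, 10), F(3, 10)
closed = p*(1-p) * (a*(1-a)*(1-p) + b*(1-b)*p) / (((1-a)*(1-p) + b*p) * (a*(1-p) + (1-b)*p))
assert V(p, a, b) == closed

# Symmetric error rates: V(p, e, e) = p(1-p)e(1-e) / (p(1-p) + e(1-e)(1-2p)^2).
e = F(1, 5)
assert V(p, e, e) == p*(1-p)*e*(1-e) / (p*(1-p) + e*(1-e)*(1-2*p)**2)
```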
<p>One consequence of this asymmetry is that the constrained minimization problem
<code>$$\min_{\alpha,\beta}V(p,\alpha,\beta)\ \text{subject to}\ 0\le\alpha,\beta\le0.5\ \text{and}\ \alpha+\beta\ge B$$</code>
has a corner solution
<code>$$(\alpha^*,\beta^*)=\begin{cases} (0,B) & \text{if}\ p\le1/2 \\ (B,0) & \text{if}\ p>1/2 \end{cases}$$</code>
for all lower bounds <code>\(B\in[0,0.5]\)</code> on the sum of the error rates.
Intuitively, if I can limit my exposure to false positives and negatives then I should prevent whichever occur in the state that’s most likely under my prior.
For example, if <code>\(p>0.5\)</code> then I’m best off allowing some false positives but preventing any false negatives.
This makes negative signals fully informative because they only occur when <code>\(\omega=0\)</code>.</p>
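<p>The corner solution can also be confirmed by brute force. The sketch below (an illustrative check with <code>\(p=0.7\)</code> and <code>\(B=0.3\)</code>, values I picked) searches a grid of feasible error-rate pairs and finds the minimum at <code>\((\alpha,\beta)=(B,0)\)</code>, as claimed for <code>\(p>0.5\)</code>:</p>

```python
def V(p, a, b):
    # Expected posterior variance of a binary signal with error rates a and b.
    q0 = b * p / ((1 - a) * (1 - p) + b * p)
    q1 = (1 - b) * p / (a * (1 - p) + (1 - b) * p)
    pr1 = a * (1 - p) + (1 - b) * p
    return (1 - pr1) * q0 * (1 - q0) + pr1 * q1 * (1 - q1)

p, B = 0.7, 0.3
grid = [i / 200 for i in range(101)]  # alpha, beta in [0, 0.5] in steps of 0.005
feasible = [(a, b) for a in grid for b in grid if a + b >= B - 1e-12]
best = min(feasible, key=lambda ab: V(p, *ab))
assert best == (B, 0.0)  # p > 0.5: allow false positives, prevent false negatives
```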
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>There is no loss in generality from assuming <code>\(\alpha,\beta\le0.5\)</code> because observing <code>\(s\)</code> is the same as observing <code>\((1-s)\)</code>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Comparing equal- and value-weighted portfolios
https://bldavies.com/blog/comparing-equal-value-weighted-portfolios/
Mon, 22 May 2023 00:00:00 +0000https://bldavies.com/blog/comparing-equal-value-weighted-portfolios/<p>Imagine two portfolios of S&P 500 companies.
One portfolio weights all companies equally; the other weights companies by their market capitalization (hereafter “value”).
Which portfolio is the better investment?</p>
<p>One way to answer this question is to look at historical data.
For example, the Center for Research in Security Prices (CRSP) <a href="https://www.crsp.org/products/documentation/crsp-indexes-sp-500%C2%AE-universe-0">provides</a> monthly returns on each portfolio between January 1926 and December 2022.
I summarize these returns in the table below.
They had overall means of 1.13% and 0.94%, standard deviations of 6.72% and 5.42%, and a Pearson correlation of 0.96.</p>
<table>
<thead>
<tr>
<th align="left">Portfolio</th>
<th align="right">Mean</th>
<th align="right">Std. dev.</th>
<th align="right">Min</th>
<th align="right">Median</th>
<th align="right">Max</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Equal-weighted</td>
<td align="right">1.13</td>
<td align="right">6.72</td>
<td align="right">-31.00</td>
<td align="right">1.36</td>
<td align="right">68.04</td>
</tr>
<tr>
<td align="left">Value-weighted</td>
<td align="right">0.94</td>
<td align="right">5.42</td>
<td align="right">-28.75</td>
<td align="right">1.30</td>
<td align="right">41.43</td>
</tr>
</tbody>
</table>
<p>Suppose past and future returns have the same distribution.
Then I expect the returns on the equal-weighted portfolio to be larger but riskier.
So my preference over portfolios depends on my risk tolerance.
I demonstrate this dependence in the chart below.
It shows the certainty-equivalent (CE) return on each portfolio for a range of <a href="https://en.wikipedia.org/wiki/Risk_aversion#Relative_risk_aversion">relative risk aversion</a> (RRA) coefficients.
The CE return equals the mean return when my RRA coefficient equals zero.
It falls when my RRA coefficient rises because I demand a larger risk premium.
The rate at which the CE return falls depends on the portfolio’s return distribution.
Based on the distributions summarized above, I prefer the equal-weighted portfolio whenever my RRA coefficient is less than 2.76.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p><img src="figures/certainty-equivalents-1.svg" alt=""></p>
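<p>A minimal sketch of the CE computation under CRRA utility (the returns below are made up for illustration, not the CRSP series): the CE return solves <code>\(u(1+\mathit{CE})=\E[u(1+r)]\)</code>, where <code>\(u\)</code> is the CRRA utility function.</p>

```python
import math

def ce_return(returns, gamma):
    # Certainty-equivalent return under CRRA utility u(w) = w^(1 - gamma) / (1 - gamma),
    # with u(w) = log(w) when gamma = 1, applied to gross returns w = 1 + r.
    gross = [1 + r for r in returns]
    if gamma == 1:
        return math.exp(sum(math.log(g) for g in gross) / len(gross)) - 1
    mean_u = sum(g ** (1 - gamma) for g in gross) / len(gross)
    return mean_u ** (1 / (1 - gamma)) - 1

returns = [0.05, 0.02, -0.03, 0.01, -0.01, 0.04]  # illustrative monthly returns
assert math.isclose(ce_return(returns, 0), sum(returns) / len(returns))  # risk neutral: CE = mean
assert ce_return(returns, 3) < ce_return(returns, 1) < ce_return(returns, 0)  # CE falls as RRA rises
```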
<p>Another way to compare the two portfolios is to look at their long-term growth rates.
I do that in the chart below.
It shows the capital gain I would have realized if I bought each portfolio in the past, reinvested my dividends, and sold my holdings at the end of 2022.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
I make these gains comparable across holding periods by presenting them as mean monthly returns.
For example, investing in the equal-weighted portfolio in December 2002 would have led to the same capital gain as investing in an asset that returned 0.90% every month for the next 20 years.</p>
<p><img src="figures/gains-1.svg" alt=""></p>
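<p>Converting a cumulative capital gain into a mean monthly return is a one-liner: if an investment grows from <code>\(V_0\)</code> to <code>\(V_T\)</code> over <code>\(n\)</code> months, the equivalent constant monthly return <code>\(g\)</code> solves <code>\((1+g)^n=V_T/V_0\)</code>. A quick sketch with made-up values (the 0.90% figure above comes from the actual CRSP data):</p>

```python
def mean_monthly_return(initial, final, months):
    # Constant monthly return g satisfying (1 + g)^months = final / initial.
    return (final / initial) ** (1 / months) - 1

# Made-up example: tripling your money over 20 years (240 months).
g = mean_monthly_return(1.0, 3.0, 240)
assert abs((1 + g) ** 240 - 3.0) < 1e-9  # compounding g recovers the total gain
```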
<p>If I invested in either portfolio before September 2010, then I would have earned more on the equal-weighted portfolio.
Its dominance over the value-weighted portfolio peaked in early 2000, when the <a href="https://en.wikipedia.org/wiki/Dot-com_bubble">dot-com crash</a> saw lots of large companies lose lots of value.</p>
<p>Of course, past and future returns can differ.
The equal-weighted portfolio may have been the better investment 20 years ago but could be a worse investment today.
So what does the theory say?</p>
<p><a href="https://doi.org/10.1057/s41260-016-0033-4">Malladi and Fabozzi (2017)</a> argue that the equal-weighted portfolio offers higher returns because it is regularly <a href="https://en.wikipedia.org/wiki/Rebalancing_investments">rebalanced</a>.
For example, if I start with equal shares in two companies, but one doubles in value and the other halves, then my portfolio will end with an 80/20 split.
So if I want to maintain equal weights then I need to sell companies that grow a lot and buy companies that don’t.
This <a href="https://en.wikipedia.org/wiki/Contrarian_investing">contrarian</a> strategy takes advantage of <a href="https://en.wikipedia.org/wiki/Mean_reversion_(finance)">mean reversion</a>.
Indeed <a href="https://doi.org/10.1007/978-3-030-66691-0_9">Plyakha et al. (2021)</a> argue that maintaining <em>unequal</em> weights would also lead to higher mean returns.
These arguments agree with empirical evidence that few, if any, investing strategies consistently outperform weighting stocks equally (e.g., <a href="https://doi.org/10.1093/rfs/hhm075">DeMiguel et al., 2009</a>; <a href="https://doi.org/10.1016/j.jbankfin.2018.09.021">Hsu et al., 2018</a>; <a href="https://doi.org/10.1007/s11156-021-01008-w">Qin and Singal, 2022</a>).</p>
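<p>The weight-drift arithmetic in the example above is easy to verify, and it makes clear why maintaining equal weights forces contrarian trades:</p>

```python
# Start with equal dollar amounts in two companies; one doubles, one halves.
values = [1.0 * 2.0, 1.0 * 0.5]              # post-return dollar values
weights = [v / sum(values) for v in values]
assert weights == [0.8, 0.2]                 # the 50/50 portfolio drifts to 80/20

# Rebalancing back to equal weights means selling the winner to buy the loser.
target = sum(values) / 2
trades = [target - v for v in values]        # negative = sell, positive = buy
assert trades == [-0.75, 0.75]
```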
<hr>
<p><em>Thanks to <a href="https://profiles.stanford.edu/john-shoven">John Shoven</a> for inspiring this post.</em></p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>For reference, most macro/finance research uses coefficients between one and three. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>I focus on investments made before January 2020 to suppress the noise from (i) the COVID-19 pandemic and (ii) having few observations with which to compute means. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Models of the AI apocalypse
https://bldavies.com/blog/models-ai-apocalypse/
Mon, 15 May 2023 00:00:00 +0000https://bldavies.com/blog/models-ai-apocalypse/<p>In <a href="https://www.econtalk.org/tyler-cowen-on-the-risks-and-impact-of-artificial-intelligence/">this week’s episode of <em>EconTalk</em></a>, Tyler Cowen asks:</p>
<blockquote>
<p>“Is there any actual mathematical model of this process of how the world is supposed to end?
…
If you look, say, at COVID or climate change fears, in both cases, there are many models you can look at.
…
I’m not saying you have to like those models.
But the point is: there’s something you look at and then you make up your mind whether or not you like those models; and then they’re tested against data.
So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we’ve been talking about this seriously, there isn’t a single model done.”</p>
</blockquote>
<p>He goes on:</p>
<blockquote>
<p>“I don’t think any idea should be dismissed.
I’ve just been inviting [AI doomsayers] to actually join the discourse of science.
‘Show us your models.
Let us see their assumptions and let’s talk about those.’
The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety.
…
Their mental model is so much: ‘We’re the insiders, we’re the experts.’
…
My mental model is: There’s a thing, science.
Try to publish this stuff in journals.
Try to model it.”</p>
</blockquote>
<p><a href="https://bldavies.com/blog/judging-economic-models">Good models</a> don’t need to be complete descriptions of reality.
But they <em>do</em> need to be logically consistent.
Their purpose is to make explicit the assumptions and premises underlying our intuitions.
Then we can subject those intuitions to formal scrutiny.</p>
<p>For example, suppose I think people should do X.
I write down a model of the process by which they decide what to do.
My model comprises a set of assumptions that imply X.
Now I ask:
Are my assumptions reasonable?
Do I believe them?
If not, then either
(i) people shouldn’t do X or
(ii) they don’t make decisions according to the process I’ve written down.
Both cases teach me something: my intuition is wrong!</p>
<p>Tyler wants AI doomsayers to go on similar intellectual journeys.
He wants to know: exactly what assumptions do they make when they say humanity is doomed?
What are the logical foundations of that claim?
Only by exposing those foundations can we test and revise them.
That’s how science works.
We tell each other <em>how</em> we think so that we can debate <em>what</em> we think.
Models help us frame the debate.
Sure, <a href="https://en.wikipedia.org/wiki/All_models_are_wrong">all models are wrong</a>.
But you can’t beat a model by waffling!</p>
Who reads <em>Marginal Revolution</em>?
https://bldavies.com/blog/who-reads-marginal-revolution/
Mon, 08 May 2023 00:00:00 +0000https://bldavies.com/blog/who-reads-marginal-revolution/<p>Here’s a summary of my website’s traffic since the start of 2023:</p>
<p><img src="figures/daily-visitors-1.svg" alt=""></p>
<p>Notice the spike on April 9, when Tyler Cowen <a href="https://marginalrevolution.com/marginalrevolution/2023/04/sunday-assorted-links-413.html">linked</a> to <a href="https://bldavies.com/blog/marginal-revolution-metadata">my post of <em>Marginal Revolution</em> metadata</a>.
That post is now my second most-viewed ever (just behind my post on <a href="https://bldavies.com/blog/applying-economics-phd-programs">applying to economics PhD programs</a>).</p>
<p>Where in the world did those views come from?
Here’s a summary:</p>
<p><img src="figures/sources-1.svg" alt=""></p>
<p>Most visitors came from the US.
This makes sense: <em>Marginal Revolution</em> is run by American authors who tend to focus on American issues.
About a third of my US-based visitors came from California, New York, or Massachusetts.
Bigger states tended to bring more visitors, but the relationship was not perfect.
For example, Californians comprise about 11.7% of the US population but 15.2% of my visitors.
These percentages differ due to selection effects: <em>Marginal Revolution</em> caters to educated readers who share the authors’ interests.
Indeed, all my visitors saw the word “metadata” and thought “I want to know more.”
I doubt the typical American would react similarly!</p>
Loan repayments
https://bldavies.com/blog/loan-repayments/
Tue, 18 Apr 2023 00:00:00 +0000https://bldavies.com/blog/loan-repayments/<p>Suppose I take out a loan.
It gains interest at rate <code>\(r\)</code>, compounded continuously.
I repay the loan by making constant, continuous payments until time <code>\(T\)</code>.
How does the repaid share of my loan vary over time?
And how does it depend on <code>\(r\)</code> and <code>\(T\)</code>?</p>
<p>Let <code>\(P_0\)</code> be the initial value of my loan: the “principal.”
Then my continuous payments <code>\(C\)</code> must satisfy
<code>$$\begin{align} P_0 &= \int_0^TCe^{-r\tau}\,\mathrm{d}\tau \\ &= \frac{C}{r}\left(1-e^{-rT}\right) \end{align}$$</code>
and so the value of my remaining payments at time <code>\(t\in[0,T]\)</code> equals
<code>$$\begin{align} P_t &\equiv \int_t^TCe^{-r(\tau-t)}\,\mathrm{d}\tau \\ &= \frac{C}{r}\left(1-e^{-r(T-t)}\right) \\ &= P_0\left(\frac{e^{-rt}-e^{-rT}}{1-e^{-rT}}\right)e^{rt}. \end{align}$$</code>
If I don’t make any payments before time <code>\(t\)</code> then the principal grows to <code>\(P_0e^{rt}\)</code>.
Therefore, the value of my repayments up to time <code>\(t\)</code> equals the difference <code>\((P_0e^{rt}-P_t)\)</code>.</p>
<p>Now let <code>\(x\equiv t/T\in[0,1]\)</code> be the share of payments I’ve made up to time <code>\(t\)</code>.
The chart below plots the corresponding share
<code>$$\frac{P_0e^{rt}-P_t}{P_0e^{rt}}\bigg\rvert_{t=xT}=\frac{1-e^{-xrT}}{1-e^{-rT}}$$</code>
of the loan that I’ve repaid.
This share grows with <code>\(x\)</code> at a decreasing rate.
Intuitively, my repayment “slows down” because my constant payments shrink relative to the compounding value of the principal.
This slowing effect is stronger when the interest rate <code>\(r\)</code> is larger and time horizon <code>\(T\)</code> is longer.</p>
<p><img src="figures/plot-1.svg" alt=""></p>
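<p>The closed-form share can be checked against a discrete-time simulation of the loan balance. Below is a sketch (parameter values mine) that pays down a unit principal in small time steps and compares the simulated repaid share with the formula:</p>

```python
import math

def repaid_share(x, r, T):
    # Closed-form repaid share derived in the text.
    return (1 - math.exp(-x * r * T)) / (1 - math.exp(-r * T))

def repaid_share_sim(x, r, T, steps=100_000):
    # Euler simulation of the balance B_t with dB = (r*B - C) dt and B_0 = P_0 = 1.
    C = r / (1 - math.exp(-r * T))  # continuous payment rate that retires the loan at T
    dt, B = T / steps, 1.0
    n = round(steps * x)
    for _ in range(n):
        B += (r * B - C) * dt
    t = n * dt
    return (math.exp(r * t) - B) / math.exp(r * t)

r, T = 0.05, 30
for x in (0.25, 0.5, 0.75, 1.0):
    assert abs(repaid_share(x, r, T) - repaid_share_sim(x, r, T)) < 1e-3
```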
<em>Marginal Revolution</em> metadata
https://bldavies.com/blog/marginal-revolution-metadata/
Fri, 07 Apr 2023 00:00:00 +0000https://bldavies.com/blog/marginal-revolution-metadata/<p>Today I released the R package <a href="https://github.com/bldavies/MRposts">MRposts</a>.
It contains data on <a href="https://marginalrevolution.com"><em>Marginal Revolution</em></a> blog posts: their <a href="#authors">authors</a>, <a href="#titles">titles</a>, <a href="#publication-times">publication times</a>, <a href="#categories">categories</a>, and <a href="#comments">comment counts</a>.
I describe these data below.
They cover all 34,189 posts published between August 2003 and March 2023.</p>
<h2 id="authors">Authors</h2>
<p><em>Marginal Revolution</em> is run by <a href="https://en.wikipedia.org/wiki/Tyler_Cowen">Tyler Cowen</a> and <a href="https://en.wikipedia.org/wiki/Alex_Tabarrok">Alex Tabarrok</a>.
They wrote 86% and 13% of the posts in MRposts.
The rest were written by several guest bloggers.
I count posts by author in the table below.</p>
<table>
<thead>
<tr>
<th align="left">Author</th>
<th align="right">Posts</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Tyler Cowen</td>
<td align="right">29,373</td>
</tr>
<tr>
<td align="left">Alex Tabarrok</td>
<td align="right">4,564</td>
</tr>
<tr>
<td align="left">Fabio Rojas</td>
<td align="right">63</td>
</tr>
<tr>
<td align="left">Justin Wolfers</td>
<td align="right">24</td>
</tr>
<tr>
<td align="left">Steven Landsburg</td>
<td align="right">19</td>
</tr>
<tr>
<td align="left">Robin Hanson</td>
<td align="right">17</td>
</tr>
<tr>
<td align="left">Tim Harford</td>
<td align="right">15</td>
</tr>
<tr>
<td align="left">Craig Newmark</td>
<td align="right">14</td>
</tr>
<tr>
<td align="left">Ed Lopez</td>
<td align="right">12</td>
</tr>
<tr>
<td align="left">Bryan Caplan</td>
<td align="right">11</td>
</tr>
<tr>
<td align="left">Eric Helland</td>
<td align="right">11</td>
</tr>
<tr>
<td align="left">Angus Grier</td>
<td align="right">10</td>
</tr>
<tr>
<td align="left">12 others, each with fewer than ten posts</td>
<td align="right">56</td>
</tr>
</tbody>
</table>
<p>Tyler wrote <a href="https://marginalrevolution.com/marginalrevolution/2003/08/the_lunar_men">the first <em>Marginal Revolution</em> post</a> on August 21, 2003, and posted every day thereafter.
His monthly output grew during the late 2000s and early 2010s.
Alex’s monthly output was lower but relatively constant:</p>
<p><img src="figures/monthly-output-1.svg" alt=""></p>
<h2 id="titles">Titles</h2>
<p>My next chart compares the words used in Tyler and Alex’s posts’ titles.
Their posts often contained “assorted links” or “facts of the day,” or explained how there are “markets in everything.”
Tyler also had many posts on “sentences to ponder” and “what [he’d] been reading.”</p>
<p><img src="figures/titular-words-1.svg" alt=""></p>
<p>The longest title contained 21 words (“The Icelandic Stock Exchange fell by 76% in early trading as it re-opened after closing for two days last week.”).
Tyler’s titles had a median of five words while Alex’s had a median of four.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<h2 id="publication-times">Publication times</h2>
<p><em>Marginal Revolution</em> posts tended to appear in early mornings and afternoons.
Tyler posted at all hours of the day, albeit seldom at night.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
Alex’s posting schedule was more regular.
His posts usually appeared between 7am and 9am:</p>
<p><img src="figures/publication-times-1.svg" alt=""></p>
<h2 id="categories">Categories</h2>
<p>MRposts matches posts with their <a href="https://marginalrevolution.com/categories">categories</a>.
The most common categories were Economics (12,757 posts), Current Affairs (5,648 posts), and Law (3,706 posts).
About 52% of posts had two or more categories, while 16% had none.</p>
<p>The following chart compares the categories of Tyler and Alex’s posts.
I count posts “fractionally” so that, e.g., posts with two categories contribute half a post to each category.
Tyler wrote proportionally more non-Economics posts than Alex.</p>
<p><img src="figures/categories-1.svg" alt=""></p>
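<p>Fractional counting is simple to implement. Here is a Python sketch with hypothetical posts (the real data live in the MRposts R package):</p>

```python
from collections import Counter

# Hypothetical post-to-category mapping for illustration.
post_categories = {
    "post-1": ["Economics"],
    "post-2": ["Economics", "Law"],
    "post-3": ["Current Affairs", "Economics", "Law"],
}

counts = Counter()
for post, cats in post_categories.items():
    for cat in cats:
        counts[cat] += 1 / len(cats)  # each post contributes one unit in total

assert abs(counts["Economics"] - (1 + 1/2 + 1/3)) < 1e-12
assert abs(sum(counts.values()) - len(post_categories)) < 1e-12
```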
<h2 id="comments">Comments</h2>
<p>The median post in MRposts had 26 comments.
Tyler’s median post had 27 comments while Alex’s had 25.
About 11% of posts had more than 100 comments, while 26% had fewer than ten and 11% had none.
I list the most-commented-on posts in the table below.</p>
<table>
<thead>
<tr>
<th align="left">Post</th>
<th align="right">Year</th>
<th align="right">Comments</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><a href="https://marginalrevolution.com/marginalrevolution/2008/08/sarah-palin">Sarah Palin</a></td>
<td align="right">2008</td>
<td align="right">947</td>
</tr>
<tr>
<td align="left"><a href="https://marginalrevolution.com/marginalrevolution/2015/10/the-case-for-getting-rid-of-borders-completely">The Case for Getting Rid of Borders-Completely</a></td>
<td align="right">2015</td>
<td align="right">711</td>
</tr>
<tr>
<td align="left"><a href="https://marginalrevolution.com/marginalrevolution/2022/06/if-you-wish-to-debate-scotus-on-roe-v-wade">If you wish to debate SCOTUS on Roe v. Wade…</a></td>
<td align="right">2022</td>
<td align="right">577</td>
</tr>
<tr>
<td align="left"><a href="https://marginalrevolution.com/marginalrevolution/2022/10/classical-liberalism-vs-the-new-right">Classical liberalism vs. The New Right</a></td>
<td align="right">2022</td>
<td align="right">567</td>
</tr>
<tr>
<td align="left"><a href="https://marginalrevolution.com/marginalrevolution/2016/05/what-in-the-hell-is-going-on">What the hell is going on?</a></td>
<td align="right">2016</td>
<td align="right">562</td>
</tr>
<tr>
<td align="left"><a href="https://marginalrevolution.com/marginalrevolution/2016/11/upward-mobility-discrimination-asians-african-americans">Upward Mobility and Discrimination: Asians and African Americans</a></td>
<td align="right">2016</td>
<td align="right">548</td>
</tr>
<tr>
<td align="left"><a href="https://marginalrevolution.com/marginalrevolution/2022/12/cwt-bleg">CWT bleg</a></td>
<td align="right">2022</td>
<td align="right">534</td>
</tr>
<tr>
<td align="left"><a href="https://marginalrevolution.com/marginalrevolution/2014/08/ferguson-and-the-debtors-prison">Ferguson and the Modern Debtor’s Prison</a></td>
<td align="right">2014</td>
<td align="right">525</td>
</tr>
<tr>
<td align="left"><a href="https://marginalrevolution.com/marginalrevolution/2016/11/trump-winning-rises-falls-status">Trump winning: who rises and falls in status?</a></td>
<td align="right">2016</td>
<td align="right">520</td>
</tr>
<tr>
<td align="left"><a href="https://marginalrevolution.com/marginalrevolution/2016/06/what-is-neo-reaction">What is neo-reaction?</a></td>
<td align="right">2016</td>
<td align="right">519</td>
</tr>
</tbody>
</table>
<p>Three of the ten most-commented-on posts were published in the last year.
Indeed, the mean number of comments per post grew over time:</p>
<p><img src="figures/comments-growth-1.svg" alt=""></p>
<p>Post engagement grew slowly during the late 2000s.
It increased sharply in early 2011, when Tyler was <a href="https://www.economist.com/free-exchange/2011/02/01/economics-most-influential-people">listed among the most influential economists</a>.</p>
<h2 id="content">Content</h2>
<p>I could update MRposts to include data on posts’ content.
This would allow users to mine the text of Tyler and Alex’s posts.
For example, many commenters have decried Tyler’s recent focus on ChatGPT and other large language models.
I document that focus in the chart below.
It shows the share of Tyler’s posts containing the string “chat”, “GPT”, “LLM”, or “language model” in each of the past 24 months.
Still, in every one of those months, the majority of his posts contained none of those strings!</p>
<p><img src="figures/focus-1.svg" alt=""></p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Mark Nagelberg <a href="https://www.marknagelberg.com/lets-scrape-a-blog-part-1/">compares</a> the mean lengths of all authors’ titles. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Hamilton Noel <a href="https://hamiltonnoel.substack.com/p/does-tyler-cowen-sleep">looks closer</a> at Tyler’s blogging habits. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Five years of blogging
https://bldavies.com/blog/five-years-blogging/
Wed, 01 Mar 2023 00:00:00 +0000https://bldavies.com/blog/five-years-blogging/<p>Today marks five years since <a href="https://bldavies.com/blog/habitat-choices-first-generation-pokemon/">my first blog post</a>.
This post is my 100th.
It summarizes the <a href="#words-used">words I’ve used</a> and <a href="#traffic">traffic I’ve received</a>.</p>
<h2 id="words-used">Words used</h2>
<p>My first 99 posts contained more than 56 thousand words:</p>
<p><img src="figures/growth-1.svg" alt=""></p>
<p>I wrote 11 posts in March and April 2020, when the pandemic forced me to “work” from home.
I’ve written 56 posts—about once every 16 days—since <a href="https://bldavies.com/blog/stanford/">starting my PhD</a> in September 2020.</p>
<p>My <a href="https://bldavies.com/blog/applying-economics-phd-programs">longest post</a> had 2,128 words and my <a href="https://bldavies.com/blog/transitivity-positive-correlations">shortest</a> had 123.
The most common (non-<a href="https://en.wikipedia.org/wiki/Stop_word">stop</a>) word was “network,” used 269 times across 34 distinct posts.
The chart below shows the six most common words overall and among posts on my most common topics.
It includes “datum” rather than “data” because I <a href="https://en.wikipedia.org/wiki/Lemmatisation">lemmatize</a> words before counting them.</p>
<p><img src="figures/common-words-1.svg" alt=""></p>
<p>So far I’ve written 34 posts on <a href="https://bldavies.com/topics/economics">economics</a> and 31 on <a href="https://bldavies.com/topics/networks">networks</a>.
Most posts had multiple topics.
The most commonly paired topics were networks and <a href="https://bldavies.com/topics/research">research</a> (eight posts), research and <a href="https://bldavies.com/topics/software">software</a> (six posts), and networks and <a href="https://bldavies.com/topics/statistics">statistics</a> (six posts).</p>
<h2 id="traffic">Traffic</h2>
<p>Since March 2020 I’ve used <a href="https://www.goatcounter.com">GoatCounter</a> to count page views and visitors.
I had lots in late 2022, when I shared my <a href="https://bldavies.com/blog/reflections-grad-school-years-1-2">reflections on graduate school</a> and people started <a href="https://bldavies.com/blog/applying-economics-phd-programs">applying to economics PhD programs</a>:</p>
<p><img src="figures/goatcounts-1.svg" alt=""></p>
<p>My three most popular posts benefit from being in the top few Google search results.
They account for about half of my (non-bot) page views:</p>
<table>
<thead>
<tr>
<th align="left">Post</th>
<th align="right">Views</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><a href="https://bldavies.com/blog/applying-economics-phd-programs">Applying to economics PhD programs</a></td>
<td align="right">4,174</td>
</tr>
<tr>
<td align="left"><a href="https://bldavies.com/blog/accessing-strava-api">Accessing the Strava API with R</a></td>
<td align="right">1,831</td>
</tr>
<tr>
<td align="left"><a href="https://bldavies.com/blog/greedy-pig-strategies">Greedy Pig strategies</a></td>
<td align="right">1,189</td>
</tr>
<tr>
<td align="left"><a href="https://bldavies.com/blog/reflections-grad-school-years-1-2">Reflections on grad school: Years 1 and 2</a></td>
<td align="right">411</td>
</tr>
<tr>
<td align="left"><a href="https://bldavies.com/blog/stanford">Stanford</a></td>
<td align="right">387</td>
</tr>
<tr>
<td align="left"><a href="https://bldavies.com/blog/degroot-learning-social-networks">DeGroot learning in social networks</a></td>
<td align="right">341</td>
</tr>
<tr>
<td align="left"><a href="https://bldavies.com/blog/ordinary-total-least-squares">Ordinary and total least squares</a></td>
<td align="right">336</td>
</tr>
<tr>
<td align="left"><a href="https://bldavies.com/blog/female-representation-collaboration-nber">Female representation and collaboration at the NBER</a></td>
<td align="right">318</td>
</tr>
<tr>
<td align="left"><a href="https://bldavies.com/blog/living-america">What’s it like living in America?</a></td>
<td align="right">296</td>
</tr>
<tr>
<td align="left"><a href="https://bldavies.com/blog/how-central-grand-central-terminal">How central is Grand Central Terminal?</a></td>
<td align="right">280</td>
</tr>
<tr>
<td align="left">Other</td>
<td align="right">4,747</td>
</tr>
<tr>
<td align="left">Total</td>
<td align="right">14,310</td>
</tr>
</tbody>
</table>
<p>Most of my visitors were from the USA (usually California or Massachusetts):</p>
<table>
<thead>
<tr>
<th align="left">Country</th>
<th align="right">Visitors</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">United States</td>
<td align="right">6,216</td>
</tr>
<tr>
<td align="left">United Kingdom</td>
<td align="right">770</td>
</tr>
<tr>
<td align="left">Australia</td>
<td align="right">642</td>
</tr>
<tr>
<td align="left">New Zealand</td>
<td align="right">561</td>
</tr>
<tr>
<td align="left">India</td>
<td align="right">294</td>
</tr>
<tr>
<td align="left">Other/unknown</td>
<td align="right">3,856</td>
</tr>
<tr>
<td align="left">Total</td>
<td align="right">12,339</td>
</tr>
</tbody>
</table>
Selection bias and fixed effects
https://bldavies.com/blog/selection-bias-fixed-effects/
Wed, 25 Jan 2023 00:00:00 +0000https://bldavies.com/blog/selection-bias-fixed-effects/<p>Economists often use <a href="https://en.wikipedia.org/wiki/Fixed_effects_model">fixed effects</a> to correct for <a href="https://bldavies.com/blog/understanding-selection-bias">selection bias</a>.
Intuitively, these effects “partial out” the reasons why our data include some observations but not others.
But this intuition relies on the selection criteria being linear functions of the dependent variable.</p>
<p>For example, suppose I have panel data on 100 individuals <code>\(i\)</code> at ten dates <code>\(t\)</code>.
These data include pairs <code>\((y_{it},x_{it})\)</code> generated by the process
<code>$$y_{it}=x_{it}+u_i+\epsilon_{it},$$</code>
where <code>\(u_i\)</code> is a fixed effect and <code>\(\epsilon_{it}\)</code> is an error term.
The <code>\(x_{it}\)</code>, <code>\(u_i\)</code>, and <code>\(\epsilon_{it}\)</code> are iid normal with zero mean and unit variance.
They all vary across individuals.
The <code>\(x_{it}\)</code> and <code>\(\epsilon_{it}\)</code> also vary over time, but the <code>\(u_i\)</code> do not.</p>
<p>The chart below plots <code>\(y_{it}\)</code> against <code>\(x_{it}\)</code> overall and within two subsets of my data:</p>
<ol>
<li>Observations for the 50 individuals <code>\(i\)</code> whose outcomes <code>\(y_{it}\)</code> have the largest mean;</li>
<li>Observations for the 50 individuals <code>\(i\)</code> whose <em>squared</em> outcomes <code>\(y_{it}^2\)</code> have the largest mean.</li>
</ol>
<p>It also shows the OLS regression line fitted to my data and its subsets.
The intercept and slope of this line depend on the selection criterion.
Individuals with larger mean outcomes tend to have larger fixed effects and narrower error distributions.
This leads OLS to estimate a higher intercept but shallower slope than in the full data.
In contrast, individuals with larger mean squared outcomes have similar fixed effects to other individuals but wider error distributions.
This leads OLS to estimate the same intercept but steeper slope than in the full data.</p>
<p><img src="figures/binscatter-1.svg" alt=""></p>
<p>What if I include fixed effects in my regression?
The box plots below summarize the slopes I estimate when I simulate my data 100 times and apply my selection criteria.
Including fixed effects removes the bias from selecting on mean outcomes.
This is because the fixed effects <em>are</em> the variables I select on.
Partialing them out removes the selection bias by definition.
In contrast, including fixed effects does not remove the bias from selecting on mean squared outcomes.
This is because the fixed effects are uncorrelated with the variables I select on.
Partialing them out removes noise but not bias.</p>
<p><img src="figures/boxplot-1.svg" alt=""></p>
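<p>The whole experiment fits in a few lines. Here is a sketch in Python with NumPy (the post itself uses R; the within transformation below plays the role of including fixed effects):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def slopes(n_i=100, n_t=10, keep=50):
    """Simulate y = x + u + eps and return OLS slopes under each selection rule."""
    x = rng.standard_normal((n_i, n_t))
    u = rng.standard_normal((n_i, 1))    # fixed effects: vary across i, not t
    eps = rng.standard_normal((n_i, n_t))
    y = x + u + eps

    def pooled(xs, ys):  # OLS slope without fixed effects
        xd, yd = xs - xs.mean(), ys - ys.mean()
        return (xd * yd).sum() / (xd * xd).sum()

    def within(xs, ys):  # OLS slope after demeaning within individuals
        xd = xs - xs.mean(axis=1, keepdims=True)
        yd = ys - ys.mean(axis=1, keepdims=True)
        return (xd * yd).sum() / (xd * xd).sum()

    sel1 = np.argsort(y.mean(axis=1))[-keep:]         # largest mean outcomes
    sel2 = np.argsort((y ** 2).mean(axis=1))[-keep:]  # largest mean squared outcomes
    return [pooled(x[sel1], y[sel1]), within(x[sel1], y[sel1]),
            pooled(x[sel2], y[sel2]), within(x[sel2], y[sel2])]

# Average the estimated slopes over 100 simulated panels
means = np.array([slopes() for _ in range(100)]).mean(axis=0)
# Demeaning restores a slope near 1 under the first selection rule
# but leaves the bias under the second.
```

<p>The function and variable names here are my own, not the post's; the point is only that the within transformation undoes selection on mean outcomes but not selection on mean squared outcomes.</p>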
Learning and persuasion
https://bldavies.com/blog/learning-persuasion/
Sun, 08 Jan 2023 00:00:00 +0000https://bldavies.com/blog/learning-persuasion/<p>People talk for many reasons.
One is to learn: to collect information that helps us make better choices.
Another is to <a href="https://bldavies.com/blog/persuading-anecdotes">persuade</a>: to convince others to make choices we think are best.</p>
<p><a href="https://doi.org/10.1016/j.geb.2021.01.001">Meng (2021)</a> shows how wanting to learn and persuade can lead to <a href="https://en.wikipedia.org/wiki/Homophily">homophily</a>.
He presents a model in which people choose conversation partners before taking actions.
Everyone wants these actions to match an unknown binary state.
But people have different prior beliefs about the state.
They update their beliefs after receiving (i) a <a href="https://bldavies.com/blog/learning-noisy-signals/">noisy signal</a> from nature and (ii) a message from their partner.
Priors are public, signals are private, and messages are <a href="https://doi.org/10.1257/aer.101.6.2590">designed to be persuasive</a>.</p>
<p>Meng studies the <a href="https://bldavies.com/blog/stable-matchings/">matchings</a> that arise in this setting.
A matching is “stable” if it has no “blocking pairs:” people who want to be partners but aren’t.
It is “<a href="https://bldavies.com/blog/assortative-mixing/">assortative</a>” if all partners are “like-minded:” their priors are both close to zero or both close to one.</p>
<p>Every assortative matching is stable.
To see why, suppose Alice and Bob are not like-minded.
Alice will only partner with Bob if it’s easier to persuade him than be persuaded by him.
But Bob will only partner with Alice if it’s easier to persuade her than be persuaded by her.
These two conditions can’t hold at the same time, so Alice and Bob can’t form a blocking pair.</p>
<p>Likewise, in Meng’s model, every stable matching is assortative.
It holds, in particular, when people care more about learning than persuading.
Like-minded partners send truthful messages because they don’t need to persuade each other.
But non-like-minded partners send distorted messages hoping to persuade each other.
These distortions make at least one person worse off than they would be if they had a like-minded partner who told them the truth.</p>
<p>Meng then considers a social planner who can choose matchings but not messages.
This planner wants to maximize the sum of everyone’s expected payoffs under their priors.
They choose an assortative matching only when the distribution of priors is symmetric.
Otherwise, they choose a matching in which people with extreme priors have non-like-minded partners with moderate priors.
This is because extremists gain more than moderates lose.
This suggests that sorting is socially bad.</p>
<p>Finally, Meng extends his model to allow stable matchings that are <em>not</em> assortative.
This can happen when signals or actions are not binary.
He leaves open the extension to settings in which people have more than one partner.
<a href="https://bldavies.com/blog/echo-chambers-useful/">Jann and Schottmüller (2021)</a> consider a version of that setting.
But they reach a different normative conclusion than Meng: sorting can be good because it stops people from sending distorted messages.</p>
Protecting Planet Xiddler
https://bldavies.com/blog/protecting-planet-xiddler/
Sat, 07 Jan 2023 00:00:00 +0000https://bldavies.com/blog/protecting-planet-xiddler/<p>This week’s <a href="https://fivethirtyeight.com/features/can-you-fend-off-the-alien-armada/">Riddler Classic</a> asks us to fend off an alien invasion:</p>
<blockquote>
<p>The astronomers of Planet Xiddler are back in action!
Unfortunately, this time they have used their telescopes to spot an armada of hostile alien warships on a direct course for Xiddler.
The armada will be arriving in exactly 100 days.
(Recall that, like Earth, there are 24 hours in a Xiddler day.)</p>
<p>Fortunately, Xiddler’s engineers have just completed construction of the planet’s first assembler, which is capable of producing any object.
An assembler can be used to build a space fighter to defend the planet, which takes one hour to produce.
An assembler can also be used to build another assembler (which, in turn, can build other space fighters or assemblers).
However, building an assembler is more time-consuming, requiring six whole days.
Also, you cannot use multiple assemblers to build one space fighter or assembler in a shorter period of time.</p>
<p>What is the greatest number of space fighters the Xiddlerian fleet can have when the alien armada arrives?</p>
</blockquote>
<p>We can solve this problem via <a href="https://en.wikipedia.org/wiki/Dynamic_programming">dynamic programming</a>.
First, let <code>\(N(t)\)</code> be the maximum number of fighters an assembler can make in <code>\(t\)</code> days.
The aliens invade in 100 days, so our goal is to compute <code>\(N(100)\)</code>.</p>
<p>An assembler can either</p>
<ol>
<li>spend a day building fighters, or</li>
<li>spend six days duplicating itself.</li>
</ol>
<p>The first option gives us 24 fighters plus however many an assembler can make in <code>\((t-1)\)</code> days.
The second option gives us however many <em>two</em> assemblers can make in <code>\((t-6)\)</code> days.
Thus <code>\(N(t)\)</code> satisfies the <a href="https://en.wikipedia.org/wiki/Bellman_equation">Bellman equation</a>
<code>$$N(t)=\max\{24+N(t-1),2N(t-6)\},$$</code>
where <code>\(N(t)=0\)</code> for all <code>\(t\le0\)</code>.
Solving this equation recursively gives
<code>$$N(100)=7,\!864,\!320.$$</code>
The chart below shows how <code>\(N(t)\)</code> grows with <code>\(t\)</code>:</p>
<p><img src="figures/growth-1.svg" alt=""></p>
<p>Fighter production begins with a 90-day “duplicate” phase in which the number of assemblers doubles 15 times: once every six days.
This gives us
<code>$$2^{15}=32,\!768$$</code>
assemblers to use during a 10-day “build” phase in which each builds 24 fighters per day, giving us
<code>$$2^{15}\times10\times24=7,\!864,\!320$$</code>
fighters in total.</p>
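<p>The recursion is easy to verify numerically. Below is a sketch in Python (for illustration only), which memoizes the Bellman equation and takes the duplication time as a parameter:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def N(t, dup_days=6, rate=24):
    """Maximum number of fighters one assembler can produce in t days."""
    if t <= 0:
        return 0
    # Option 1: build fighters for a day. Option 2: spend dup_days duplicating.
    return max(rate + N(t - 1, dup_days, rate),
               2 * N(t - dup_days, dup_days, rate))

print(N(100))  # 7864320 = 2^15 * 10 * 24
```

<p>Setting <code>dup_days = 3</code> gives <code>\(2^{32}\times4\times24\)</code> fighters, reproducing the four-day build phase described below.</p>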
<p>The length of the build phase depends on how quickly an assembler can build fighters or duplicate itself.
For example, if it takes only three days to duplicate then the build phase lasts only four days.
This is because the opportunity cost of duplicating (not building now) falls relative to the benefit of duplicating (building twice as fast later).
The opposite is true if it takes more than six days to duplicate.
The build rate itself doesn&rsquo;t change the timing: a faster rate scales the value of building now and of duplicating first equally, so it cancels out of the comparison.</p>
Learning from opinions
https://bldavies.com/blog/learning-opinions/
Fri, 06 Jan 2023 00:00:00 +0000https://bldavies.com/blog/learning-opinions/<p>We often use others’ opinions to guide our choices.
For example, we use movie and Yelp reviews to decide what to watch and where to eat.
But opinions can be hard to interpret because they depend on objective facts (e.g., movie/food quality) and subjective perspectives (e.g., reviewers’ tastes).
So, when seeking opinions, we face a trade-off between</p>
<ol>
<li>“well-informed” sources who know a lot and</li>
<li>“well-understood” sources with known perspectives.</li>
</ol>
<p><a href="https://doi.org/10.3982/ECTA13320">Sethi and Yildiz (2016)</a> study this trade-off and its consequences.
They consider a group of people who receive <a href="https://bldavies.com/blog/learning-noisy-signals/">noisy signals</a> about a sequence of states.
These people form posterior beliefs (“opinions”) about each state based on their signal precisions (“expertise”) and prior beliefs (“perspectives”).
Expertise is public, and varies across people and states.
Perspectives are private, and vary across people but not states.
Everyone observes their own opinion and the opinion of a chosen “target.”
They always choose the target whose opinion reveals the most information about the current state.</p>
<p>Initially, no-one knows anyone else’s perspective, so everyone chooses the target with the most expertise (i.e., the most precise signal).
But people learn others’ perspectives over time by comparing the signals they receive to the opinions they observe.
Eventually, everyone attaches to a set of “long-run experts” and never considers opinions outside that set, even if those opinions are better informed.</p>
<p>This set of long-run experts can vary across people.
To see why, suppose Alice and Bob observe Charlie’s opinion about a given state.
Alice and Charlie receive precise signals about that state, but Bob doesn’t.
Alice knows that her opinion can only differ from Charlie’s if they have different perspectives.
In contrast, Bob can’t tell if his opinion differs from Charlie’s because they have different perspectives or because Bob’s signal is imprecise.
So Alice learns more about Charlie’s perspective than Bob does.
She’s more likely to include Charlie in her set of long-run experts.</p>
<p>Sethi and Yildiz’s model explains why people gravitate to like-minded opinion sources.
We learn more about the perspectives of people who know about the same things, making us more likely to attach to them.
This contrasts with the <a href="https://bldavies.com/blog/ideological-bias-trust-information-sources/">trust</a>- and <a href="https://bldavies.com/blog/persuading-anecdotes/">persuasion</a>-based explanations discussed in previous posts.
It leads people to ask experts for opinions on topics beyond their expertise.
It may also lead people to <a href="https://bldavies.com/blog/truth-seekers-ideologues/">befriend fellow ideologues</a> who see the world the same (possibly incorrect) way.</p>
stravadata demo
https://bldavies.com/blog/stravadata-demo/
Sun, 01 Jan 2023 00:00:00 +0000https://bldavies.com/blog/stravadata-demo/<p><a href="https://github.com/bldavies/stravadata">stravadata</a> is an R package I use to organize and analyze my <a href="https://www.strava.com/">Strava</a> activity data.
This post offers some example analyses:</p>
<ul>
<li><a href="#computing-annual-totals">Computing annual totals</a></li>
<li><a href="#making-activity-heat-maps">Making activity heat maps</a></li>
<li><a href="#counting-efforts">Counting efforts</a></li>
<li><a href="#making-training-calendars">Making training calendars</a></li>
<li><a href="#tracking-personal-records">Tracking personal records</a></li>
</ul>
<p>My examples use data on my running activities from the last five years:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">dplyr</span><span class="p">)</span>
<span class="nf">library</span><span class="p">(</span><span class="n">lubridate</span><span class="p">)</span>
<span class="nf">library</span><span class="p">(</span><span class="n">stravadata</span><span class="p">)</span>
<span class="n">runs</span> <span class="o">=</span> <span class="n">activities</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">type</span> <span class="o">==</span> <span class="s">'Run'</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">year</span> <span class="o">=</span> <span class="nf">year</span><span class="p">(</span><span class="n">start_time</span><span class="p">),</span>
<span class="n">date</span> <span class="o">=</span> <span class="nf">date</span><span class="p">(</span><span class="n">start_time</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">year</span> <span class="o">%in%</span> <span class="m">2018</span><span class="o">:</span><span class="m">2022</span><span class="p">)</span>
</code></pre></div><h2 id="computing-annual-totals">Computing annual totals</h2>
<p><code>runs</code> contains activity-level features like distance traveled and time spent moving.
I sum these features by year, then use <code>knitr::kable</code> to display these sums in a table:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">knitr</span><span class="p">)</span>
<span class="n">runs</span> <span class="o">%>%</span>
<span class="nf">group_by</span><span class="p">(</span><span class="n">Year</span> <span class="o">=</span> <span class="n">year</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">summarise</span><span class="p">(</span><span class="n">Runs</span> <span class="o">=</span> <span class="nf">n</span><span class="p">(),</span>
<span class="nf">`Distance </span><span class="p">(</span><span class="n">km</span><span class="p">)</span><span class="n">` = sum(distance) / 1e3,
</span><span class="n"> `</span><span class="nf">Time </span><span class="p">(</span><span class="n">hours</span><span class="p">)</span><span class="n">`</span> <span class="o">=</span> <span class="nf">sum</span><span class="p">(</span><span class="n">time_moving</span><span class="p">)</span> <span class="o">/</span> <span class="m">3600</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate_at</span><span class="p">(</span><span class="m">3</span><span class="o">:</span><span class="m">4</span><span class="p">,</span> <span class="o">~</span><span class="nf">format</span><span class="p">(</span><span class="nf">round</span><span class="p">(</span><span class="n">.)</span><span class="p">,</span> <span class="n">big.mark</span> <span class="o">=</span> <span class="s">','</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">kable</span><span class="p">(</span><span class="n">align</span> <span class="o">=</span> <span class="s">'crrr'</span><span class="p">)</span>
</code></pre></div><table>
<thead>
<tr>
<th align="center">Year</th>
<th align="right">Runs</th>
<th align="right">Distance (km)</th>
<th align="right">Time (hours)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">2018</td>
<td align="right">68</td>
<td align="right">544</td>
<td align="right">52</td>
</tr>
<tr>
<td align="center">2019</td>
<td align="right">152</td>
<td align="right">1,085</td>
<td align="right">92</td>
</tr>
<tr>
<td align="center">2020</td>
<td align="right">224</td>
<td align="right">2,026</td>
<td align="right">172</td>
</tr>
<tr>
<td align="center">2021</td>
<td align="right">207</td>
<td align="right">2,149</td>
<td align="right">173</td>
</tr>
<tr>
<td align="center">2022</td>
<td align="right">145</td>
<td align="right">1,517</td>
<td align="right">120</td>
</tr>
</tbody>
</table>
<h2 id="making-activity-heat-maps">Making activity heat maps</h2>
<p>I record my runs with a watch that tracks my GPS coordinates.
stravadata stores these coordinates in <code>streams</code>.
For example, here’s the course for last year’s <a href="https://www.paloaltoonline.com/moonlight_run/">Moonlight Run</a> in Palo Alto:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">ggplot2</span><span class="p">)</span>
<span class="n">p</span> <span class="o">=</span> <span class="n">runs</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">name</span> <span class="o">==</span> <span class="s">'Moonlight Run'</span> <span class="o">&</span> <span class="n">year</span> <span class="o">==</span> <span class="m">2022</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">select</span><span class="p">(</span><span class="n">id</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">left_join</span><span class="p">(</span><span class="n">streams</span><span class="p">,</span> <span class="n">by</span> <span class="o">=</span> <span class="s">'id'</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">ggplot</span><span class="p">(</span><span class="nf">aes</span><span class="p">(</span><span class="n">lon</span><span class="p">,</span> <span class="n">lat</span><span class="p">))</span> <span class="o">+</span>
<span class="nf">geom_path</span><span class="p">()</span>
<span class="nf">plot_nicely</span><span class="p">(</span><span class="n">p</span><span class="p">)</span> <span class="c1"># Add text and formatting</span>
</code></pre></div><p><img src="figures/moonlight-run-1.svg" alt=""></p>
<p>Combining the GPS coordinates from many runs yields a local map.
For example, suppose I want to map my runs near <a href="https://bldavies.com/blog/stanford/">Stanford</a>.
I first make a table of GPS paths near a local landmark:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">coords</span> <span class="o">=</span> <span class="nf">c</span><span class="p">(</span><span class="m">-122.16</span><span class="p">,</span> <span class="m">37.44</span><span class="p">)</span> <span class="c1"># Trader Joe's</span>
<span class="n">tol</span> <span class="o">=</span> <span class="m">0.08</span>
<span class="n">stanford_paths</span> <span class="o">=</span> <span class="n">streams</span> <span class="o">%>%</span>
<span class="nf">semi_join</span><span class="p">(</span><span class="n">runs</span><span class="p">,</span> <span class="n">by</span> <span class="o">=</span> <span class="s">'id'</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">step</span> <span class="o">=</span> <span class="nf">row_number</span><span class="p">())</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="nf">sqrt</span><span class="p">((</span><span class="n">lon</span> <span class="o">-</span> <span class="n">coords[1]</span><span class="p">)</span> <span class="n">^</span> <span class="m">2</span> <span class="o">+</span> <span class="p">(</span><span class="n">lat</span> <span class="o">-</span> <span class="n">coords[2]</span><span class="p">)</span> <span class="n">^</span> <span class="m">2</span><span class="p">)</span> <span class="o"><</span> <span class="n">tol</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">lon</span> <span class="o">!=</span> <span class="nf">lag</span><span class="p">(</span><span class="n">lon</span><span class="p">)</span> <span class="o">|</span> <span class="n">lat</span> <span class="o">!=</span> <span class="nf">lag</span><span class="p">(</span><span class="n">lat</span><span class="p">))</span> <span class="o">%>%</span> <span class="c1"># Remove pauses</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">new_path</span> <span class="o">=</span> <span class="nf">row_number</span><span class="p">()</span> <span class="o">==</span> <span class="m">1</span> <span class="o">|</span> <span class="n">id</span> <span class="o">!=</span> <span class="nf">lag</span><span class="p">(</span><span class="n">id</span><span class="p">)</span> <span class="o">|</span> <span class="n">step</span> <span class="o">!=</span> <span class="nf">lag</span><span class="p">(</span><span class="n">step</span><span class="p">)</span> <span class="o">+</span> <span class="m">1</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">path</span> <span class="o">=</span> <span class="nf">cumsum</span><span class="p">(</span><span class="n">new_path</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">select</span><span class="p">(</span><span class="n">path</span><span class="p">,</span> <span class="n">lat</span><span class="p">,</span> <span class="n">lon</span><span class="p">)</span>
</code></pre></div><p>I increment <code>path</code> every time I start a new run, unpause a previous run, or re-enter the area defined by <code>coords</code> and <code>tol</code>.
I use <code>path</code> as a grouping variable so that <code>ggplot2::ggplot</code> knows to draw each path separately.
I then use the <code>alpha</code> argument of <code>ggplot2::geom_path</code> to create a “heat map” of paths I run most often:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">p</span> <span class="o">=</span> <span class="n">stanford_paths</span> <span class="o">%>%</span>
<span class="nf">ggplot</span><span class="p">(</span><span class="nf">aes</span><span class="p">(</span><span class="n">lon</span><span class="p">,</span> <span class="n">lat</span><span class="p">,</span> <span class="n">group</span> <span class="o">=</span> <span class="n">path</span><span class="p">))</span> <span class="o">+</span>
<span class="nf">geom_path</span><span class="p">(</span><span class="n">alpha</span> <span class="o">=</span> <span class="m">0.1</span><span class="p">)</span>
<span class="nf">plot_nicely</span><span class="p">(</span><span class="n">p</span><span class="p">)</span>
</code></pre></div><p><img src="figures/stanford-1.jpeg" alt=""></p>
<h2 id="counting-efforts">Counting efforts</h2>
<p><code>best_efforts</code> stores my fastest times running a range of distances (that Strava calls “efforts”) within each activity:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">head</span><span class="p">(</span><span class="n">best_efforts</span><span class="p">)</span>
</code></pre></div><pre><code>## # A tibble: 6 × 4
## id effort start_index end_index
## <dbl> <chr> <int> <int>
## 1 1253004287 1 mile 15 447
## 2 1253004287 1/2 mile 11 232
## 3 1253004287 1k 12 284
## 4 1253004287 2 mile 11 876
## 5 1253004287 400m 11 120
## 6 1253004287 5k 11 1342
</code></pre><p>The <code>id</code> column stores activity IDs and the <code>effort</code> column stores effort descriptions.
I focus on 5k, 10k, and half marathon efforts:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">focal_efforts</span> <span class="o">=</span> <span class="nf">c</span><span class="p">(</span><span class="s">'5k'</span><span class="p">,</span> <span class="s">'10k'</span><span class="p">,</span> <span class="s">'Half-Marathon'</span><span class="p">)</span>
<span class="n">efforts</span> <span class="o">=</span> <span class="n">runs</span> <span class="o">%>%</span>
<span class="nf">left_join</span><span class="p">(</span><span class="n">best_efforts</span><span class="p">,</span> <span class="n">by</span> <span class="o">=</span> <span class="s">'id'</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">effort</span> <span class="o">%in%</span> <span class="n">focal_efforts</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">effort</span> <span class="o">=</span> <span class="nf">factor</span><span class="p">(</span><span class="n">effort</span><span class="p">,</span> <span class="n">focal_efforts</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">select</span><span class="p">(</span><span class="n">year</span><span class="p">,</span> <span class="n">date</span><span class="p">,</span> <span class="n">id</span><span class="p">,</span> <span class="n">effort</span><span class="p">,</span> <span class="n">start_index</span><span class="p">,</span> <span class="n">end_index</span><span class="p">)</span>
</code></pre></div><p><code>efforts</code> inherits the <code>year</code> variable from <code>runs</code>.
I use this variable to count efforts within each year.
I then use <code>tidyr::spread</code> and <code>knitr::kable</code> to display these counts in a table:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">tidyr</span><span class="p">)</span>
<span class="n">efforts</span> <span class="o">%>%</span>
<span class="nf">count</span><span class="p">(</span><span class="n">Year</span> <span class="o">=</span> <span class="n">year</span><span class="p">,</span> <span class="n">effort</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">spread</span><span class="p">(</span><span class="n">effort</span><span class="p">,</span> <span class="n">n</span><span class="p">,</span> <span class="n">fill</span> <span class="o">=</span> <span class="m">0</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">kable</span><span class="p">(</span><span class="n">align</span> <span class="o">=</span> <span class="s">'c'</span><span class="p">)</span>
</code></pre></div><table>
<thead>
<tr>
<th align="center">Year</th>
<th align="center">5k</th>
<th align="center">10k</th>
<th align="center">Half-Marathon</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">2018</td>
<td align="center">64</td>
<td align="center">24</td>
<td align="center">0</td>
</tr>
<tr>
<td align="center">2019</td>
<td align="center">136</td>
<td align="center">34</td>
<td align="center">2</td>
</tr>
<tr>
<td align="center">2020</td>
<td align="center">191</td>
<td align="center">88</td>
<td align="center">21</td>
</tr>
<tr>
<td align="center">2021</td>
<td align="center">200</td>
<td align="center">90</td>
<td align="center">25</td>
</tr>
<tr>
<td align="center">2022</td>
<td align="center">131</td>
<td align="center">85</td>
<td align="center">9</td>
</tr>
</tbody>
</table>
<h2 id="making-training-calendars">Making training calendars</h2>
<p><code>efforts</code> also inherits the <code>date</code> variable from <code>runs</code>.
I use this variable to create <a href="https://github.blog/2013-01-07-introducing-contributions/#contributions-calendar">GitHub-esque</a> training calendars.
For example, here’s my running calendar for 2021:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">p</span> <span class="o">=</span> <span class="n">efforts</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">year</span> <span class="o">==</span> <span class="m">2021</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">group_by</span><span class="p">(</span><span class="n">date</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">slice_max</span><span class="p">(</span><span class="n">effort</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">distinct</span><span class="p">(</span><span class="n">effort</span><span class="p">)</span> <span class="o">%>%</span> <span class="c1"># I ran twice on some days</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">Week</span> <span class="o">=</span> <span class="nf">floor_date</span><span class="p">(</span><span class="n">date</span><span class="p">,</span> <span class="s">'weeks'</span><span class="p">,</span> <span class="n">week_start</span> <span class="o">=</span> <span class="m">1</span><span class="p">),</span>
<span class="n">Weekday</span> <span class="o">=</span> <span class="nf">wday</span><span class="p">(</span><span class="n">date</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span> <span class="bp">T</span><span class="p">,</span> <span class="n">week_start</span> <span class="o">=</span> <span class="m">1</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">ggplot</span><span class="p">(</span><span class="nf">aes</span><span class="p">(</span><span class="n">Week</span><span class="p">,</span> <span class="n">Weekday</span><span class="p">))</span> <span class="o">+</span>
<span class="nf">geom_tile</span><span class="p">(</span><span class="nf">aes</span><span class="p">(</span><span class="n">alpha</span> <span class="o">=</span> <span class="n">effort</span><span class="p">),</span> <span class="n">col</span> <span class="o">=</span> <span class="s">'white'</span><span class="p">,</span> <span class="n">linewidth</span> <span class="o">=</span> <span class="m">0.5</span><span class="p">)</span>
<span class="nf">plot_nicely</span><span class="p">(</span><span class="n">p</span><span class="p">)</span>
</code></pre></div><p><img src="figures/calendar-1.svg" alt=""></p>
<p>I use <code>lubridate::floor_date</code> to identify weeks and <code>lubridate::wday</code> to identify weekdays.
The <code>col</code> and <code>linewidth</code> arguments of <code>ggplot2::geom_tile</code> add space between tiles.</p>
<h2 id="tracking-personal-records">Tracking personal records</h2>
<p>I combine <code>runs</code>, <code>streams</code>, and <code>efforts</code> to track my record running paces over time.
I follow a three-step process:</p>
<p>First, I compute the mean pace for each effort.
I do this using the <code>start_index</code> and <code>end_index</code> columns that <code>efforts</code> inherits from <code>best_efforts</code>.
These columns tell me where each effort occurs in the corresponding activity’s stream:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">effort_paces</span> <span class="o">=</span> <span class="n">streams</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">id</span> <span class="o">%in%</span> <span class="n">runs</span><span class="o">$</span><span class="n">id</span><span class="p">)</span> <span class="o">%>%</span>
<span class="c1"># Create indices</span>
<span class="nf">group_by</span><span class="p">(</span><span class="n">id</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">index</span> <span class="o">=</span> <span class="nf">row_number</span><span class="p">())</span> <span class="o">%>%</span>
<span class="nf">ungroup</span><span class="p">()</span> <span class="o">%>%</span>
<span class="c1"># Extract stream segment for each effort</span>
<span class="nf">inner_join</span><span class="p">(</span><span class="n">efforts</span><span class="p">,</span> <span class="n">by</span> <span class="o">=</span> <span class="s">'id'</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">index</span> <span class="o">>=</span> <span class="n">start_index</span> <span class="o">&</span> <span class="n">index</span> <span class="o"><=</span> <span class="n">end_index</span><span class="p">)</span> <span class="o">%>%</span>
<span class="c1"># Compute mean paces</span>
<span class="nf">group_by</span><span class="p">(</span><span class="n">id</span><span class="p">,</span> <span class="n">date</span><span class="p">,</span> <span class="n">effort</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">summarise</span><span class="p">(</span><span class="n">distance</span> <span class="o">=</span> <span class="nf">max</span><span class="p">(</span><span class="n">distance</span><span class="p">)</span> <span class="o">-</span> <span class="nf">min</span><span class="p">(</span><span class="n">distance</span><span class="p">),</span>
<span class="n">time</span> <span class="o">=</span> <span class="nf">max</span><span class="p">(</span><span class="n">time</span><span class="p">)</span> <span class="o">-</span> <span class="nf">min</span><span class="p">(</span><span class="n">time</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">ungroup</span><span class="p">()</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">pace</span> <span class="o">=</span> <span class="p">(</span><span class="n">time</span> <span class="o">/</span> <span class="m">60</span><span class="p">)</span> <span class="o">/</span> <span class="p">(</span><span class="n">distance</span> <span class="o">/</span> <span class="m">1e3</span><span class="p">))</span>
<span class="nf">head</span><span class="p">(</span><span class="n">effort_paces</span><span class="p">)</span>
</code></pre></div><pre><code>## # A tibble: 6 × 6
## id date effort distance time pace
## <dbl> <date> <fct> <dbl> <dbl> <dbl>
## 1 1335437333 2018-01-01 5k 5002. 1442 4.81
## 2 1338123783 2018-01-03 5k 5000. 1605 5.35
## 3 1344338907 2018-01-07 5k 5000. 1455 4.85
## 4 1347622521 2018-01-09 5k 5000 1493 4.98
## 5 1353889714 2018-01-13 5k 5001. 1622 5.41
## 6 1353889714 2018-01-13 10k 10001. 3380 5.63
</code></pre><p>The values in the <code>distance</code> column differ slightly from the descriptions in the <code>effort</code> column.
This is because the stream segment doesn’t always cover the described distance exactly.
But the multiplicative errors in <code>distance</code> and <code>time</code> should be equal on average, making <code>pace</code> an unbiased estimate of my true mean pace.
I measure this pace in minutes per kilometer.</p>
<p>Next, I extract my record paces by deleting efforts slower than my previous best:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">record_paces</span> <span class="o">=</span> <span class="n">effort_paces</span> <span class="o">%>%</span>
<span class="nf">group_by</span><span class="p">(</span><span class="n">effort</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">arrange</span><span class="p">(</span><span class="n">date</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">pace</span> <span class="o">==</span> <span class="nf">cummin</span><span class="p">(</span><span class="n">pace</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">ungroup</span><span class="p">()</span>
</code></pre></div><p>Finally, I “fill in the gaps” by adding days on which I <em>don’t</em> set a new record.
I do this using <code>tidyr::crossing</code> and <code>tidyr::fill</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">date_range</span> <span class="o">=</span> <span class="nf">seq</span><span class="p">(</span><span class="nf">date</span><span class="p">(</span><span class="s">'2018-01-01'</span><span class="p">),</span> <span class="nf">date</span><span class="p">(</span><span class="s">'2022-12-31'</span><span class="p">),</span> <span class="n">by</span> <span class="o">=</span> <span class="s">'day'</span><span class="p">)</span>
<span class="n">record_paces_filled</span> <span class="o">=</span> <span class="nf">crossing</span><span class="p">(</span><span class="n">date</span> <span class="o">=</span> <span class="n">date_range</span><span class="p">,</span> <span class="n">effort</span> <span class="o">=</span> <span class="n">focal_efforts</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">left_join</span><span class="p">(</span><span class="n">record_paces</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">group_by</span><span class="p">(</span><span class="n">effort</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">fill</span><span class="p">(</span><span class="n">pace</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="o">!</span><span class="nf">is.na</span><span class="p">(</span><span class="n">pace</span><span class="p">))</span>
</code></pre></div><p><code>record_paces</code> and <code>record_paces_filled</code> differ in that the latter includes date-effort pairs with no new records.
This makes <code>record_paces_filled</code> produce horizontal lines when I plot its data:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">p</span> <span class="o">=</span> <span class="n">record_paces_filled</span> <span class="o">%>%</span>
<span class="nf">ggplot</span><span class="p">(</span><span class="nf">aes</span><span class="p">(</span><span class="n">date</span><span class="p">,</span> <span class="n">pace</span><span class="p">,</span> <span class="n">group</span> <span class="o">=</span> <span class="n">effort</span><span class="p">))</span> <span class="o">+</span>
<span class="nf">geom_line</span><span class="p">()</span>
<span class="nf">plot_nicely</span><span class="p">(</span><span class="n">p</span><span class="p">)</span>
</code></pre></div><p><img src="figures/record-paces-1.svg" alt=""></p>
Social networks in rural India
https://bldavies.com/blog/social-networks-rural-india/
Sat, 24 Dec 2022 00:00:00 +0000https://bldavies.com/blog/social-networks-rural-india/<p><a href="https://github.com/bldavies/IndianVillages">IndianVillages</a> is a new R package containing data on social networks in rural India.
I derived these data from <a href="https://doi.org/10.1126/science.1236498">Banerjee et al.’s (2013)</a> surveys of households across 75 <a href="https://en.wikipedia.org/wiki/Karnataka">Karnatakan</a> villages.
This post describes the derived data and the networks they define.
I also show that the networks are <a href="https://bldavies.com/blog/assortative-mixing/">assortatively mixed</a> with respect to <a href="https://en.wikipedia.org/wiki/Caste_system_in_India">caste</a>.</p>
<h2 id="data-description">Data description</h2>
<p>IndianVillages provides two tables.
The first, <code>households</code>, links each household to its village and caste:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">dplyr</span><span class="p">)</span>
<span class="nf">library</span><span class="p">(</span><span class="n">IndianVillages</span><span class="p">)</span>
<span class="nf">head</span><span class="p">(</span><span class="n">households</span><span class="p">)</span>
</code></pre></div><pre><code>## # A tibble: 6 × 3
## hhid village caste
## <dbl> <dbl> <chr>
## 1 1001 1 <NA>
## 2 1002 1 <NA>
## 3 1003 1 <NA>
## 4 1004 1 <NA>
## 5 1005 1 <NA>
## 6 1006 1 <NA>
</code></pre><p>The <code>hhid</code> and <code>village</code> columns store household and village IDs.
The <code>caste</code> column stores caste memberships:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">count</span><span class="p">(</span><span class="n">households</span><span class="p">,</span> <span class="n">caste</span><span class="p">,</span> <span class="n">sort</span> <span class="o">=</span> <span class="bp">T</span><span class="p">)</span>
</code></pre></div><pre><code>## # A tibble: 6 × 2
## caste n
## <chr> <int>
## 1 OBC 5517
## 2 <NA> 4455
## 3 Scheduled Caste 2584
## 4 General 1371
## 5 Scheduled Tribe 618
## 6 Minority 359
</code></pre><p>Some <code>caste</code> values are missing because the surveys were changed during their collection.
About 53% of the households with known castes are in the <a href="https://en.wikipedia.org/wiki/Other_Backward_Class">Other Backward Class</a> (“OBC”).
This exceeds the (disputed) share of OBCs in India’s general population during the survey period.</p>
<p>The second table, <code>household_relationships</code>, contains information on inter-household relationships:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">head</span><span class="p">(</span><span class="n">household_relationships</span><span class="p">)</span>
</code></pre></div><pre><code>## # A tibble: 6 × 4
## hhid.x hhid.y village type
## <dbl> <dbl> <dbl> <fct>
## 1 1001 1002 1 Help with a decision
## 2 1001 1002 1 Borrow kerosene or rice from
## 3 1001 1002 1 Lend kerosene or rice to
## 4 1001 1002 1 Are related to
## 5 1001 1002 1 Invite to one's home
## 6 1001 1002 1 Visit in another's home
</code></pre><p>The <code>hhid.x</code> and <code>hhid.y</code> columns store ego and alter household IDs.
The <code>type</code> column stores relationship types:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">count</span><span class="p">(</span><span class="n">household_relationships</span><span class="p">,</span> <span class="n">type</span><span class="p">,</span> <span class="n">sort</span> <span class="o">=</span> <span class="bp">T</span><span class="p">)</span>
</code></pre></div><pre><code>## # A tibble: 12 × 2
## type n
## <fct> <int>
## 1 Visit in another's home 33629
## 2 Invite to one's home 32652
## 3 Engage socially with 30939
## 4 Borrow money from 25514
## 5 Lend kerosene or rice to 23993
## 6 Borrow kerosene or rice from 23743
## 7 Lend money to 23558
## 8 Obtain medical advice from 22310
## 9 Help with a decision 17228
## 10 Are related to 16037
## 11 Give advice to 15613
## 12 Go to temple with 2700
</code></pre><p>These types correspond to questions asked in Banerjee et al.’s surveys.</p>
<h2 id="inter-household-networks">Inter-household networks</h2>
<p>We can use <code>households</code> and <code>household_relationships</code> to define social networks among the households in each village.
First, use the <code>graph_from_data_frame</code> function from <a href="https://igraph.org/">igraph</a> to create the network among all households:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">igraph</span><span class="p">)</span>
<span class="n">net</span> <span class="o">=</span> <span class="nf">graph_from_data_frame</span><span class="p">(</span>
<span class="nf">distinct</span><span class="p">(</span><span class="n">household_relationships</span><span class="p">,</span> <span class="n">hhid.x</span><span class="p">,</span> <span class="n">hhid.y</span><span class="p">),</span>
<span class="n">directed</span> <span class="o">=</span> <span class="bp">F</span><span class="p">,</span>
<span class="n">vertices</span> <span class="o">=</span> <span class="n">households</span>
<span class="p">)</span>
</code></pre></div><p><code>net</code> contains 66,862 edges: one for each pair of households with at least one social relationship.
There are no between-village relationships in the data, so we can partition <code>net</code> into village-specific networks without deleting any edges:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">purrr</span><span class="p">)</span>
<span class="n">villages</span> <span class="o">=</span> <span class="nf">sort</span><span class="p">(</span><span class="nf">unique</span><span class="p">(</span><span class="n">households</span><span class="o">$</span><span class="n">village</span><span class="p">))</span>
<span class="n">village_nets</span> <span class="o">=</span> <span class="nf">map</span><span class="p">(</span><span class="n">villages</span><span class="p">,</span> <span class="o">~</span><span class="nf">subgraph</span><span class="p">(</span><span class="n">net</span><span class="p">,</span> <span class="nf">V</span><span class="p">(</span><span class="n">net</span><span class="p">)</span><span class="o">$</span><span class="n">village</span> <span class="o">==</span> <span class="n">.)</span><span class="p">)</span>
<span class="nf">sum</span><span class="p">(</span><span class="nf">map_dbl</span><span class="p">(</span><span class="n">village_nets</span><span class="p">,</span> <span class="n">gsize</span><span class="p">))</span> <span class="c1"># Same as gsize(net)</span>
</code></pre></div><pre><code>## [1] 66862
</code></pre><p>The networks in <code>village_nets</code> are too large to visualize clearly.
Instead, let’s compute some of their properties:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">village_nets_properties</span> <span class="o">=</span> <span class="nf">map_df</span><span class="p">(</span><span class="n">village_nets</span><span class="p">,</span> <span class="o">~</span><span class="p">{</span>
<span class="n">comp</span> <span class="o">=</span> <span class="nf">components</span><span class="p">(</span><span class="n">.)</span>
<span class="n">giant</span> <span class="o">=</span> <span class="nf">subgraph</span><span class="p">(</span><span class="n">.,</span> <span class="n">comp</span><span class="o">$</span><span class="n">membership</span> <span class="o">==</span> <span class="nf">which.max</span><span class="p">(</span><span class="n">comp</span><span class="o">$</span><span class="n">csize</span><span class="p">))</span>
<span class="nf">tibble</span><span class="p">(</span>
<span class="n">Households</span> <span class="o">=</span> <span class="nf">gorder</span><span class="p">(</span><span class="n">.)</span><span class="p">,</span>
<span class="n">`Mean degree`</span> <span class="o">=</span> <span class="nf">mean</span><span class="p">(</span><span class="nf">degree</span><span class="p">(</span><span class="n">.)</span><span class="p">),</span>
<span class="n">`% of households in giant`</span> <span class="o">=</span> <span class="m">100</span> <span class="o">*</span> <span class="nf">gorder</span><span class="p">(</span><span class="n">giant</span><span class="p">)</span> <span class="o">/</span> <span class="nf">gorder</span><span class="p">(</span><span class="n">.)</span><span class="p">,</span>
<span class="n">`Mean distance in giant`</span> <span class="o">=</span> <span class="nf">mean_distance</span><span class="p">(</span><span class="n">giant</span><span class="p">)</span>
<span class="p">)</span>
<span class="p">})</span>
</code></pre></div><p>I summarize these properties in the table below.
The number of households in each village ranges from 77 to 356.
The mean degree of the households in each village ranges from 6.11 to 13.44.
Most households are in the giant component for their village, and are connected to others in that component via paths of length two or three.</p>
<table>
<thead>
<tr>
<th align="left">Property</th>
<th align="right">Mean</th>
<th align="right">Std. dev.</th>
<th align="right">Min.</th>
<th align="right">Median</th>
<th align="right">Max.</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Households</td>
<td align="right">198.72</td>
<td align="right">59.29</td>
<td align="right">77.00</td>
<td align="right">190.00</td>
<td align="right">356.00</td>
</tr>
<tr>
<td align="left">Mean degree</td>
<td align="right">8.90</td>
<td align="right">1.61</td>
<td align="right">6.11</td>
<td align="right">8.72</td>
<td align="right">13.44</td>
</tr>
<tr>
<td align="left">% of households in giant</td>
<td align="right">95.10</td>
<td align="right">2.71</td>
<td align="right">84.62</td>
<td align="right">95.54</td>
<td align="right">99.42</td>
</tr>
<tr>
<td align="left">Mean distance in giant</td>
<td align="right">2.75</td>
<td align="right">0.21</td>
<td align="right">2.30</td>
<td align="right">2.72</td>
<td align="right">3.32</td>
</tr>
</tbody>
</table>
<h2 id="inter-caste-mixing">Inter-caste mixing</h2>
<p>We can use <code>net</code> to study the extent of <a href="https://bldavies.com/blog/assortative-mixing/">assortative mixing</a> with respect to caste membership.
First, delete the 4,455 households with missing <code>caste</code> values:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">subnet</span> <span class="o">=</span> <span class="nf">subgraph</span><span class="p">(</span><span class="n">net</span><span class="p">,</span> <span class="o">!</span><span class="nf">is.na</span><span class="p">(</span><span class="nf">V</span><span class="p">(</span><span class="n">net</span><span class="p">)</span><span class="o">$</span><span class="n">caste</span><span class="p">))</span>
</code></pre></div><p><code>subnet</code> contains 10,449 households with a mean degree of 9.08.
This is similar to the mean degree in <code>net</code>.
The two networks also have similar mean distances between connected households: 2.85 in <code>subnet</code>, versus 2.81 in <code>net</code>.</p>
<p>Next, compute <code>subnet</code>'s mixing matrix:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">bldr</span><span class="p">)</span> <span class="c1"># https://github.com/bldavies/bldr</span>
<span class="n">mix_mat</span> <span class="o">=</span> <span class="nf">get_mixing_matrix</span><span class="p">(</span><span class="n">subnet</span><span class="p">,</span> <span class="s">'caste'</span><span class="p">)</span>
</code></pre></div><p>I define <code>get_mixing_matrix</code> <a href="https://github.com/bldavies/bldr/blob/master/R/get_mixing_matrix.R">here</a>.
It returns a matrix in which rows and columns correspond to castes, and entries equal the share of edges joining households in each caste pair.
Multiplying these entries by the sum of degrees—which, by the <a href="https://en.wikipedia.org/wiki/Handshaking_lemma">degree sum formula</a>, equals twice the number of edges—yields a table of inter-caste edge counts:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">mix_mat</span> <span class="o">*</span> <span class="p">(</span><span class="m">2</span> <span class="o">*</span> <span class="nf">gsize</span><span class="p">(</span><span class="n">subnet</span><span class="p">))</span>
</code></pre></div><pre><code>##
## General Minority OBC Sch. Caste Sch. Tribe
## General 8680 79 3118 932 521
## Minority 79 1860 381 156 84
## OBC 3118 381 40058 4325 2241
## Sch. Caste 932 156 4325 16074 910
## Sch. Tribe 521 84 2241 910 2722
</code></pre><p>For example, <code>subnet</code> contains 3,118 edges between households in general castes and households in OBC castes.</p>
<p>We can measure the extent of assortative mixing by comparing <code>mix_mat</code> to the matrix we’d expect if edges were independent of caste.
This matrix equals the outer product of the row and column sums of <code>mix_mat</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">mix_mat_indep</span> <span class="o">=</span> <span class="nf">rowSums</span><span class="p">(</span><span class="n">mix_mat</span><span class="p">)</span> <span class="o">%*%</span> <span class="nf">t</span><span class="p">(</span><span class="nf">colSums</span><span class="p">(</span><span class="n">mix_mat</span><span class="p">))</span>
</code></pre></div><p>Comparing the traces of <code>mix_mat</code> and <code>mix_mat_indep</code> allows us to measure mixing overall:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">tr</span> <span class="o">=</span> <span class="nf">function</span><span class="p">(</span><span class="n">m</span><span class="p">)</span> <span class="nf">sum</span><span class="p">(</span><span class="nf">diag</span><span class="p">(</span><span class="n">m</span><span class="p">))</span>
<span class="nf">c</span><span class="p">(</span><span class="nf">tr</span><span class="p">(</span><span class="n">mix_mat</span><span class="p">),</span> <span class="nf">tr</span><span class="p">(</span><span class="n">mix_mat_indep</span><span class="p">))</span>
</code></pre></div><pre><code>## [1] 0.7313254 0.3598672
</code></pre><p>So <code>subnet</code> contains about twice as many within-caste edges as we’d expect if edges were independent of caste.</p>
<p>We can also compare <code>mix_mat</code> and <code>mix_mat_indep</code> element-wise to assess which inter-caste relationships are most over-represented:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">round</span><span class="p">(</span><span class="n">mix_mat</span> <span class="o">/</span> <span class="n">mix_mat_indep</span><span class="p">,</span> <span class="m">2</span><span class="p">)</span>
</code></pre></div><pre><code>##
## General Minority OBC Sch. Caste Sch. Tribe
## General 4.64 0.22 0.44 0.30 0.57
## Minority 0.22 26.93 0.28 0.26 0.48
## OBC 0.44 0.28 1.51 0.37 0.65
## Sch. Caste 0.30 0.26 0.37 3.04 0.60
## Sch. Tribe 0.57 0.48 0.65 0.60 6.15
</code></pre><p>So, for example, there are about 51% more OBC-OBC edges than we’d expect if edges were independent of caste, but less than half as many general-OBC edges.</p>
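<p>The <code>get_mixing_matrix</code> function itself is only linked above, so here is a rough Python sketch of the same computation (the toy edge list and group labels are hypothetical, and this is an illustration rather than the post’s R implementation). It builds a mixing matrix from an edge list and compares its trace to the independence benchmark:</p>

```python
from collections import defaultdict

# Toy undirected edge list and group labels (hypothetical data).
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (4, 6), (2, 5)]
group = {1: 'A', 2: 'A', 3: 'A', 4: 'B', 5: 'B', 6: 'B'}

# Mixing matrix: share of edge ends joining each group pair.
# Counting each undirected edge in both directions makes the matrix
# symmetric, and its entries sum to one.
counts = defaultdict(float)
for u, v in edges:
    counts[(group[u], group[v])] += 1
    counts[(group[v], group[u])] += 1
total = 2 * len(edges)  # degree sum formula: twice the number of edges
mix = {pair: n / total for pair, n in counts.items()}

groups = sorted(set(group.values()))
observed = sum(mix.get((g, g), 0.0) for g in groups)  # trace of mixing matrix
marginal = {g: sum(mix.get((g, h), 0.0) for h in groups) for g in groups}
expected = sum(marginal[g] ** 2 for g in groups)      # trace under independence
print(observed, expected)  # 0.75 vs. 0.5
```

<p>Here six of the eight edges are within-group, so the observed within-group share (0.75) exceeds the share expected under independence (0.5), just as in the village networks above.</p>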
Optimal pacing with random energy costs
https://bldavies.com/blog/optimal-pacing-random-energy-costs/
Mon, 12 Dec 2022 00:00:00 +0000https://bldavies.com/blog/optimal-pacing-random-energy-costs/<p><a href="https://bldavies.com/blog/optimal-pacing-varying-energy-costs/">My previous post</a> discussed how I should pace myself in a running race.
I allowed the cost of running fast to vary during the race, and I showed how the costs I faced determined my optimal speeds and finish time.
I assumed that I knew the costs in advance—for example, that I knew which parts of the race had the steepest hills and the strongest headwinds.</p>
<p>But sometimes I <em>don’t</em> know the costs of running fast in advance.
For example, I might not know the terrain or how the weather will turn out.
This uncertainty prevents me from committing to a pacing strategy before the race begins.
Instead, I must adapt my strategy to the costs I encounter during the race.</p>
<p>This post discusses my optimal pacing strategy when I face random energy costs.
I assume these costs follow a <a href="https://en.wikipedia.org/wiki/Markov_chain">Markov chain</a>.
This allows me to <a href="#solving-the-problem">solve</a> for my optimal speeds and finish times numerically.
I show that my <a href="#ex-ante-expected-times"><em>ex ante</em> expected time</a> falls when my costs become more variable and less persistent.
I also show that my <a href="#realized-times">realized time</a> depends on the number and timing of high cost realizations.</p>
<h2 id="allowing-for-random-costs">Allowing for random costs</h2>
<p>The setup is similar to <a href="https://bldavies.com/blog/optimal-pacing-varying-energy-costs/">my previous post</a>:
I have <code>\(k>0\)</code> units of energy to allocate across <code>\(N\)</code> laps <code>\(n\in\{1,2,\ldots,N\}\)</code>.
It costs <code>\(c_ns_n\)</code> units to run at speed <code>\(s_n\)</code> in lap <code>\(n\)</code>, where the per-unit cost <code>\(c_n>0\)</code> varies with <code>\(n\)</code>.
I want to minimize my total time
<code>$$\DeclareMathOperator{\E}{E} \DeclareMathOperator{\Var}{Var} \renewcommand{\epsilon}{\varepsilon} T\equiv\sum_{n=1}^N\frac{1}{s_n}$$</code>
subject to the dynamic energy constraint
<code>$$k_{n+1}=k_n-c_ns_n,$$</code>
boundary conditions <code>\(k_1=k\)</code> and <code>\(k_{N+1}=0\)</code>, and non-negativity constraint <code>\(s_n\ge0\)</code>.
But now the costs <code>\(c_n\)</code> are random.
So I can’t choose the entire speed sequence <code>\((s_n)_{n=1}^N\)</code> before the race begins.
Instead, I choose each term <code>\(s_n\)</code> after observing the cost history <code>\(c_1,c_2,\ldots,c_n\)</code>.</p>
<p>For simplicity, I assume <code>\(c_n\in\{1-\epsilon,1+\epsilon\}\)</code> for some <code>\(\epsilon\in[0,1)\)</code>, and that <code>\(\Pr(c_1=1+\epsilon)=0.5\)</code> and <code>\(\Pr(c_{n+1}=c_n)=p\)</code> for each <code>\(n\)</code>.
The probability <code>\(p\)</code> controls costs’ persistence: if <code>\(p=1\)</code> then they never change, whereas if <code>\(p=0\)</code> then they change every lap.
The cost in lap <code>\(n+1\le N\)</code> has conditional mean
<code>$$\begin{align} \E_n[c_{n+1}] &\equiv \E[c_{n+1}\mid c_1,c_2,\ldots,c_n] \\ &= \begin{cases} 1+(2p-1)\epsilon & \text{if}\ c_n=1+\epsilon \\ 1-(2p-1)\epsilon & \text{if}\ c_n=1-\epsilon \end{cases} \end{align}$$</code>
and variance
<code>$$\begin{align} \Var_n(c_{n+1}) &= \E_n[c_{n+1}^2]-\E_n[c_{n+1}]^2 \\ &= 4p(1-p)\epsilon^2, \end{align}$$</code>
where <code>\(\E_n\)</code> takes expectations given the first <code>\(n\)</code> cost realizations, and where <code>\(\epsilon\)</code> controls the variance of <code>\(c_{n+1}\)</code>.
For example, if <code>\(p=0.5\)</code> then costs are independent across laps, and so <code>\(\E_n[c_{n+1}]=1\)</code> and <code>\(\Var_n(c_{n+1})=\epsilon^2\)</code> for each <code>\(n\)</code>.
But as <code>\(p\)</code> moves away from 0.5, knowing the cost history <code>\(c_1,c_2,\ldots,c_n\)</code> gives me more information about <code>\(c_{n+1}\)</code>, thereby decreasing <code>\(\Var_n(c_{n+1})\)</code>.</p>
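<p>These conditional moments are easy to check by simulation. The Python sketch below (my own illustration, with arbitrary parameter values) samples transitions out of the high-cost state and compares the empirical mean and variance of <code>\(c_{n+1}\)</code> with the formulas above:</p>

```python
import random

eps, p = 0.5, 0.8  # cost spread and persistence (illustrative values)
rng = random.Random(0)

# Sample c_{n+1} many times, conditional on c_n = 1 + eps:
# the chain stays in the same state with probability p and flips otherwise.
draws = [1 + eps if rng.random() < p else 1 - eps for _ in range(200_000)]

mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)

print(mean)  # theory: 1 + (2p - 1) * eps = 1.3
print(var)   # theory: 4 * p * (1 - p) * eps^2 = 0.16
```

<p>With these values the empirical moments land close to the theoretical ones, and setting <code>p = 0.5</code> instead recovers the independent case with mean 1 and variance <code>eps ** 2</code>.</p>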
<h2 id="solving-the-problem">Solving the problem</h2>
<p>Facing random costs forces me to solve my pacing problem sequentially: to choose each speed <code>\(s_n\)</code> based on the observed cost history <code>\(c_1,c_2,\ldots,c_n\)</code> and distribution of future costs <code>\(c_{n+1},c_{n+2},\ldots,c_N\)</code>.
This is equivalent to choosing the amount of energy <code>\(k_{n+1}\)</code> to carry into the next lap.
I make this choice via the <a href="https://en.wikipedia.org/wiki/Bellman_equation">Bellman equation</a>
<code>$$V_n=\min_{k_{n+1}}\left\{\frac{c_n}{k_n-k_{n+1}}+\E_n[V_{n+1}]\right\},$$</code>
where
<code>$$V_n\equiv\sum_{m=n}^N\frac{1}{s_m}$$</code>
is the time taken to run laps <code>\(n\)</code> through <code>\(N\)</code>.
It turns out that
<code>$$V_n=\frac{a_n}{k_n}$$</code>
for each <code>\(n\in\{1,2,\ldots,N+1\}\)</code>, where the coefficients <code>\(a_1,a_2,\ldots,a_{N+1}\)</code> are defined recursively by
<code>$$\begin{align} a_{N+1} &= 0 \\ a_n &= \left(\sqrt{c_n}+\sqrt{\E_n[a_{n+1}]}\right)^2. \end{align}$$</code>
If the costs <code>\(c_n\)</code> are non-random then the coefficients <code>\(a_n\)</code> are also non-random, and we obtain the solution described in <a href="https://bldavies.com/blog/optimal-pacing-varying-energy-costs/">my previous post</a>.
But if the costs are random then so are the <code>\(a_n\)</code>, and calculating them involves a case-wise analysis that grows exponentially with <code>\(N\)</code>.
Instead, I proceed numerically: by computing <code>\(a_n\)</code> in each cost state <code>\(c_n\)</code> given the implied distribution of future states.
This is possible because the cost sequence is a Markov chain, which means that <code>\(c_n\)</code> is a <a href="https://en.wikipedia.org/wiki/Sufficient_statistic">sufficient statistic</a> for the future costs <code>\(c_{n+1},c_{n+2},\ldots,c_N\)</code>.
I use this property to compute the optimal speeds
<code>$$s_n=\frac{k_n}{c_n+\sqrt{c_n\E_n[a_{n+1}]}}$$</code>
and finish time <code>\(T\)</code> associated with each cost sequence realization.</p>
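<p>The backward recursion for the coefficients <code>\(a_n\)</code> is short to implement. Below is a Python sketch (an illustration under the two-state Markov assumption, not the post’s actual code) that computes <code>\(a_n\)</code> in each cost state by backward induction, then sanity-checks the fully persistent case <code>\(p=1\)</code> against the known finish time <code>\(T=N^2c_1/k\)</code>:</p>

```python
import math

def solve_coefficients(N, eps, p):
    """Compute a[n][s], where V_n = a_n / k_n, for cost states
    s = 0 (cost 1 - eps) and s = 1 (cost 1 + eps)."""
    c = [1 - eps, 1 + eps]
    # a[N + 1] = 0 in both states; fill a[N], ..., a[1] backwards.
    a = [[0.0, 0.0] for _ in range(N + 2)]
    for n in range(N, 0, -1):
        for s in (0, 1):
            # E_n[a_{n+1}]: the cost persists with probability p.
            ea = p * a[n + 1][s] + (1 - p) * a[n + 1][1 - s]
            a[n][s] = (math.sqrt(c[s]) + math.sqrt(ea)) ** 2
    return a

N, k, eps = 100, 100, 0.5
a = solve_coefficients(N, eps, p=1.0)
# With p = 1, costs never change, so a_1 = N^2 * c_1 and T = a_1 / k.
print(round(a[1][1] / k, 6))  # high-cost start: 100^2 * 1.5 / 100 = 150.0
print(round(a[1][0] / k, 6))  # low-cost start:  100^2 * 0.5 / 100 = 50.0
```

<p>Given these coefficients, each realized speed follows from the formula for <code>\(s_n\)</code> above, using the same conditional expectation <code>ea</code> for the current state.</p>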
<h2 id="ex-ante-expected-times"><em>Ex ante</em> expected times</h2>
<p>Consider the case with full cost persistence: <code>\(p=1\)</code>.
Then <code>\(c_n=c_1\)</code> for each <code>\(n\)</code>, from which it follows that <code>\(T=N^2c_1/k\)</code>.
But <code>\(c_1\)</code> has mean <code>\(\E[c_1]=1\)</code>, so my <em>ex ante</em> expected time with <code>\(p=1\)</code> equals
<code>$$\E[T\mid p=1]=\frac{N^2}{k}.$$</code>
This is the finish time I expect if I know costs are constant but don’t know if they’re high or low.
Conversely, if I know costs always alternate (i.e., that <code>\(p=0\)</code>), then my <em>ex ante</em> expected time equals
<code>$$\E[T\mid p=0]=\frac{N^2\E[\sqrt{c_1}]^2}{k}+\begin{cases} 0 & \text{if}\ N\ \text{is even} \\ \Var(\sqrt{c_1})/k & \text{if}\ N\ \text{is odd}. \end{cases}$$</code>
The additional <code>\(\Var(\sqrt{c_1})\)</code> term when <code>\(N\)</code> is odd comes from the cost sequence being imbalanced: it has <code>\((N-1)/2+1\)</code> copies of <code>\(c_1\)</code> but only <code>\((N-1)/2\)</code> copies of the other cost value.
This imbalance becomes inconsequential as <code>\(N\)</code> becomes large.
Thus
<code>$$\begin{align} \E[T\mid p=1]-\E[T\mid p=0] &\approx \frac{N^2}{k}\left(1-\E[\sqrt{c_1}]^2\right) \\ &= \frac{N^2}{2k}\left(1-\sqrt{1-\epsilon^2}\right), \end{align}$$</code>
which grows with <code>\(\epsilon\)</code>.
We can understand this growth via the following chart.
It shows how my expected time increases from <code>\(\E[T\mid p=0]\)</code> to <code>\(\E[T\mid p=1]\)</code> as <code>\(p\)</code> increases from zero to one.
Intuitively, if <code>\(p\)</code> is large then I could face persistently high costs that slow me down.
But if <code>\(p\)</code> is small then high costs are likely to be “cancelled out” by low costs, improving my optimal time.
The benefit of this canceling grows as the difference <code>\(2\epsilon\)</code> between high and low costs grows.</p>
<p><img src="figures/expected-times-1.svg" alt=""></p>
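<p>These closed forms are easy to check numerically. The sketch below (Python rather than this blog's usual R, with illustrative parameters) evaluates the known-sequence optimal time <code>\(T=(\sum_n\sqrt{c_n})^2/k\)</code> for each constant or alternating realization and compares the averages against the formulas above.</p>

```python
import math

N, k, eps = 101, 100.0, 0.5   # odd N exercises the imbalance term
hi, lo = 1 + eps, 1 - eps

def finish_time(costs, k):
    """Optimal time for a known cost sequence: T = (sum of sqrt(c_n))^2 / k."""
    return sum(math.sqrt(c) for c in costs) ** 2 / k

# p = 1: costs are constant at c_1, which is hi or lo with equal probability.
ET_p1 = (finish_time([hi] * N, k) + finish_time([lo] * N, k)) / 2

# p = 0: costs alternate, starting from hi or lo with equal probability.
seq_hi = [hi if n % 2 == 0 else lo for n in range(N)]
seq_lo = [lo if n % 2 == 0 else hi for n in range(N)]
ET_p0 = (finish_time(seq_hi, k) + finish_time(seq_lo, k)) / 2

# Closed form: N^2 E[sqrt(c_1)]^2 / k, plus Var(sqrt(c_1)) / k because N is odd.
E_sqrt = (math.sqrt(hi) + math.sqrt(lo)) / 2
var_sqrt = (hi + lo) / 2 - E_sqrt ** 2
ET_p0_formula = (N ** 2 * E_sqrt ** 2 + var_sqrt) / k

print(ET_p1, ET_p0, ET_p0_formula)
```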
<h2 id="realized-times">Realized times</h2>
<p>The relationship between my actual and expected times depends on the realized cost sequence.
I demonstrate this dependence in the table below.
It shows the mean (standard deviation) of my actual and expected times across 100 simulated 100-lap races with 25, 50, and 75 high-cost laps.
It also shows the “oracle” time I would obtain if I knew the cost sequence in advance.
This time depends only on the number of high-cost laps, whereas my actual time depends on both the number and order of such laps.
Likewise, my expected time depends only on my parameter choices: <code>\(N=100\)</code>, <code>\(k=100\)</code>, <code>\(\epsilon=0.5\)</code>, and <code>\(p=0.5\)</code>.
These parameters are constant across simulated races, so my expected time is also constant.</p>
<table>
<thead>
<tr>
<th align="center">High-cost laps</th>
<th align="center">Actual time</th>
<th align="center">Expected time</th>
<th align="center">Oracle time</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">25</td>
<td align="center">71.38 (0.37)</td>
<td align="center">93.65</td>
<td align="center">69.98</td>
</tr>
<tr>
<td align="center">50</td>
<td align="center">93.50 (0.14)</td>
<td align="center">93.65</td>
<td align="center">93.30</td>
</tr>
<tr>
<td align="center">75</td>
<td align="center">122.00 (0.58)</td>
<td align="center">93.65</td>
<td align="center">119.98</td>
</tr>
</tbody>
</table>
<p>I finish faster than expected when I face 25 high-cost laps, which is unexpectedly few.
Whereas I finish slower than expected when I face 75 high-cost laps, which is unexpectedly many.
I always finish slower than the oracle time because I have to optimize my speeds sequentially, whereas the oracle has the option to optimize them all at once.</p>
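<p>The expected time in the table can be approximated with a short backward recursion. The sketch below is my Python reconstruction, not the author's code: it assumes state-contingent coefficients satisfying <code>\(\sqrt{a_n}=\sqrt{c_n}+\sqrt{\E_n[a_{n+1}]}\)</code>, the random-cost analog of the recursion derived in the deterministic post, so that the <em>ex ante</em> expected time is <code>\(\E[a_1]/k\)</code>.</p>

```python
import math

def expected_time(N, k, eps, p):
    """Ex ante expected finish time E[a_1(c_1)] / k for two-state Markov
    costs c_n in {1 - eps, 1 + eps} with persistence probability p."""
    hi, lo = 1 + eps, 1 - eps
    a = {hi: 0.0, lo: 0.0}  # a_{N+1} = 0 in both cost states
    for _ in range(N):
        ea = {hi: p * a[hi] + (1 - p) * a[lo],   # E_n[a_{n+1}] given c_n = hi
              lo: p * a[lo] + (1 - p) * a[hi]}   # ... and given c_n = lo
        a = {c: (math.sqrt(c) + math.sqrt(ea[c])) ** 2 for c in (hi, lo)}
    return (a[hi] + a[lo]) / (2 * k)

ET_half = expected_time(100, 100.0, 0.5, 0.5)  # the table's parameters
ET_zero = expected_time(100, 100.0, 0.5, 0.0)
ET_one = expected_time(100, 100.0, 0.5, 1.0)
bench_zero = 100 * ((math.sqrt(1.5) + math.sqrt(0.5)) / 2) ** 2
print(ET_zero, ET_half, ET_one)
```

<p>At <code>\(p=1\)</code> the recursion returns <code>\(N^2/k=100\)</code>, at <code>\(p=0\)</code> it returns the alternating-cost benchmark, and at <code>\(p=0.5\)</code> it lands between the two, consistent with the table.</p>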
<p>The difference between my actual and oracle times depends on the order in which I encounter high- and low-cost laps.
For example, consider the following four orderings:</p>
<ol>
<li>50 high-cost laps followed by 50 low-cost laps;</li>
<li>50 low-cost laps followed by 50 high-cost laps;</li>
<li>25 low-cost laps followed by 25 high-cost laps, repeated twice;</li>
<li>10 low-cost laps followed by 10 high-cost laps, repeated five times.</li>
</ol>
<p>I assume the same parameters <code>\((N,k,\epsilon,p)\)</code> as in the simulations above, so my <em>ex ante</em> expected time equals 93.65 in all four orderings.
Likewise, my oracle time equals 93.30 in all orderings because they all contain 50 high-cost laps and 50 low-cost laps.
This oracle time comes from choosing speeds before each race begins, whereas my actual time comes from choosing speeds when I start each lap.
I compare these choices in the chart below.</p>
<p><img src="figures/realized-times-1.svg" alt=""></p>
<p>Consider the first ordering, with 50 high-cost laps followed by 50 low-cost laps.
I start at about the same speed as the oracle.
But I slow down in laps two through 50 to preserve my energy, which is unexpectedly expensive.
I speed up in lap 51 when energy becomes cheap, then keep speeding up as energy keeps being unexpectedly cheap.
I sprint the last few laps to use the excess energy I saved from running slow earlier.
Whereas the oracle never has excess energy: it always uses the optimal amount, maintaining a constant speed in each block of constant-cost laps.
This makes the oracle time 3.6% faster than my actual time.</p>
<p>Now consider the second ordering, with 50 low-cost laps followed by 50 high-cost laps.
Again, I start at about the same speed as the oracle.
But now I <em>speed up</em> in laps two through 50 because energy is unexpectedly <em>cheap</em>.
I slow down in lap 51 when energy becomes expensive, then keep slowing down as energy keeps being unexpectedly expensive.
I <a href="https://en.wikipedia.org/wiki/Hitting_the_wall">bonk</a> in the last few laps, having used too much energy by running too fast in the first half.
Whereas the oracle never bonks.
It finishes 4.3% faster than me.</p>
<p>Having shorter blocks of constant-cost laps narrows the gap between my actual and oracle times.
This is because short blocks prevent me from straying too far from the oracle’s energy consumption path.
Intuitively, the more frequently I encounter different costs, the more these costs meet my expectations, and so the less I respond to costs being unexpectedly expensive or cheap.
Indeed, my actual time approaches the oracle time as the blocks of constant-cost laps approach one-lap lengths.
This echoes <a href="#ex-ante-expected-times">my earlier discussion</a> of <em>ex ante</em> expected times: I finish faster when costs are less persistent.</p>
Optimal pacing with varying energy costs
https://bldavies.com/blog/optimal-pacing-varying-energy-costs/
Sun, 11 Dec 2022 00:00:00 +0000https://bldavies.com/blog/optimal-pacing-varying-energy-costs/<p>Suppose I’m running a race.
I have a fixed amount of energy to “spend” on running fast.
But the energy cost of running fast varies during the race (e.g., it’s high on hills and low on flats).
How should I pace myself to minimize my race time?</p>
<p>This post discusses my optimal pacing problem.
I describe it mathematically, derive its solution in <a href="#solving-the-two-lap-case">simple</a> and <a href="#solving-the-general-case">general</a> settings, and analyze these solutions’ <a href="#solution-properties">properties</a>.
I assume energy costs are deterministic, whereas <a href="https://bldavies.com/blog/optimal-pacing-random-energy-costs/">my next post</a> allows them to be random.</p>
<h2 id="the-optimal-pacing-problem">The optimal pacing problem</h2>
<p>My race consists of <code>\(N\)</code> “laps” <code>\(n\in\{1,2,\ldots,N\}\)</code> with equal lengths.
I start with <code>\(k_1=k>0\)</code> units of energy and finish the race with none.
Running lap <code>\(n\)</code> at speed <code>\(s_n\)</code> costs <code>\(c_ns_n\)</code> units of energy, where <code>\(c_n>0\)</code> varies with <code>\(n\)</code>.</p>
<p>My goal is to find the speed sequence <code>\((s_n)_{n=1}^N\)</code> that minimizes my total time<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
<code>$$\DeclareMathOperator{\E}{E} \DeclareMathOperator{\Var}{Var} \newcommand{\der}{\mathrm{d}} \newcommand{\parfrac}[2]{\frac{\partial #1}{\partial #2}} T\equiv\sum_{n=1}^N\frac{1}{s_n}$$</code>
subject to the dynamic energy constraint
<code>$$k_{n+1}=k_n-c_ns_n,$$</code>
boundary conditions <code>\(k_1=k\)</code> and <code>\(k_{N+1}=0\)</code>, and non-negativity constraint <code>\(s_n\ge0\)</code>.</p>
<h2 id="solving-the-two-lap-case">Solving the two-lap case</h2>
<p>We can build intuition by solving the case with <code>\(N=2\)</code>.
Then the dynamic constraint and boundary conditions imply
<code>$$T=\frac{c_1}{k-k_2}+\frac{c_2}{k_2},$$</code>
where <code>\(k_2\)</code> is the energy I choose to leave for the second lap.
It satisfies the first-order condition <code>\(\partial T/\partial k_2=0\)</code>, which we can write as
<code>$$\frac{c_1}{(k-k_2)^2}=\frac{c_2}{k_2^2}.$$</code>
The left-hand side is the marginal cost (in units of total time) of using less energy in the first lap.
The right-hand side is the marginal benefit of using more energy in the second lap.
The first-order condition balances this marginal cost and benefit.
It determines how I should <a href="https://en.wikipedia.org/wiki/Consumption_smoothing">smooth my energy consumption</a> across laps.</p>
<p>Rearranging the first-order condition for <code>\(k_2\)</code> gives
<code>$$k_2=\frac{\sqrt{c_2}}{\sqrt{c_1}+\sqrt{c_2}}k.$$</code>
So I should spend my energy proportionally to the square roots of the costs I face.
For example, if <code>\(c_1=4c_2\)</code> then I should spend two thirds of my energy on the first lap and a third on the second.
This leads me to run twice as fast on the second lap and makes my total time equal <code>\(9c_2/k\)</code>.
In contrast, if I spent energy proportionally to costs then I would spend four fifths on the first lap and a fifth on the second.
I would run at a constant speed and my total time would equal <code>\(10c_2/k\)</code>.
That strategy would be optimal if the costs were constant at their mean <code>\(5c_2/2\)</code>.
But they <em>aren’t</em> constant: they vary by a factor of four.
Square-root scaling takes advantage of this variation.
It makes me run slow when it’s expensive to run fast.</p>
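<p>A quick numerical check of the two-lap case (a Python sketch with illustrative values <code>\(k=1\)</code> and <code>\(c_1=4c_2=4\)</code>):</p>

```python
import math

k, c2 = 1.0, 1.0
c1 = 4 * c2

def total_time(k2):
    """T = c1/(k - k2) + c2/k2, where k2 is the energy left for lap two."""
    return c1 / (k - k2) + c2 / k2

# Square-root rule: leave k2* = sqrt(c2) / (sqrt(c1) + sqrt(c2)) * k for lap two.
k2_star = math.sqrt(c2) / (math.sqrt(c1) + math.sqrt(c2)) * k  # = k / 3
T_star = total_time(k2_star)                                   # = 9 c2 / k

# Spending proportionally to costs gives constant speed but a slower time.
T_prop = total_time(c2 / (c1 + c2) * k)                        # = 10 c2 / k

# A grid search over k2 confirms the square-root rule is the minimizer.
grid = [total_time(0.001 + 0.998 * i / 10_000) for i in range(10_001)]
print(T_star, T_prop, min(grid))
```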
<h2 id="solving-the-general-case">Solving the general case</h2>
<p>The results and intuitions from the case with <code>\(N=2\)</code> generalize to cases with <code>\(N>2\)</code>.
But those cases require more powerful solution methods.
I explain two: using the <a href="https://en.wikipedia.org/wiki/Hamiltonian_%28control_theory%29">Hamiltonian</a> and using the <a href="https://en.wikipedia.org/wiki/Bellman_equation">Bellman equation</a>.
The first is faster, but the second is more intuitive and extends naturally to a setting with <a href="https://bldavies.com/blog/optimal-pacing-random-energy-costs/">random costs</a>.</p>
<h3 id="using-the-hamiltonian">Using the Hamiltonian</h3>
<p>The Hamiltonian for my optimal pacing problem is
<code>$$H\equiv-\frac{1}{s_n}-\lambda_{n+1}c_ns_n,$$</code>
where <code>\(\lambda_{n+1}\)</code> is a costate that satisfies
<code>$$\lambda_{n+1}-\lambda_n=-\parfrac{H}{k_n}$$</code>
for each <code>\(n\)</code>.
But <code>\(\partial H/\partial k_n=0\)</code> and so <code>\(\lambda_{n+1}=\lambda\)</code> is constant.
Substituting it into the first-order condition <code>\(\partial H/\partial s_n=0\)</code> gives
<code>$$s_n=\frac{1}{\sqrt{\lambda c_n}}.$$</code>
Now the dynamic constraint and boundary conditions imply
<code>$$\sum_{n=1}^Nc_ns_n=k,$$</code>
from which it follows that
<code>$$\sqrt\lambda=\frac{1}{k}\sum_{n=1}^N\sqrt{c_n}$$</code>
and therefore
<code>$$s_n=\frac{k}{\sqrt{c_n}\sum_{m=1}^N\sqrt{c_m}}$$</code>
for each <code>\(n\)</code>.
Then my total time equals
<code>$$T=\frac{1}{k}\left(\sum_{n=1}^N\sqrt{c_n}\right)^2.$$</code>
For example, letting <code>\(N=2\)</code> and <code>\(c_1=4c_2\)</code> yields the optimal time <code>\(T=9c_2/k\)</code> <a href="#solving-the-two-lap-case">described above</a>.</p>
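<p>The closed form is easy to verify numerically. The following Python sketch (with arbitrary illustrative costs) checks that the optimal speeds exhaust the energy budget and that the total time matches <code>\(T=(\sum_n\sqrt{c_n})^2/k\)</code>.</p>

```python
import math
import random

random.seed(0)
N, k = 10, 5.0
costs = [random.uniform(0.5, 2.0) for _ in range(N)]

S = sum(math.sqrt(c) for c in costs)
speeds = [k / (math.sqrt(c) * S) for c in costs]  # s_n = k / (sqrt(c_n) sum_m sqrt(c_m))

energy_used = sum(c * s for c, s in zip(costs, speeds))
T = sum(1 / s for s in speeds)
print(energy_used, T)
```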
<h3 id="using-the-bellman-equation">Using the Bellman equation</h3>
<p>The dynamic constraint implies
<code>$$s_n=\frac{k_n-k_{n+1}}{c_n}$$</code>
for each <code>\(n.\)</code>
Consequently, the cost sequence <code>\((c_n)_{n=1}^N\)</code> and “remaining energy” sequence <code>\((k_{n+1})_{n=0}^N\)</code> uniquely determine the speed sequence <code>\((s_n)_{n=1}^N\)</code>.
So if
<code>$$V_n\equiv\sum_{m=n}^N\frac{1}{s_m}$$</code>
denotes the time spent running laps <code>\(n\)</code> through <code>\(N\)</code> when I pace myself optimally, then <code>\(V_n\)</code> must satisfy the Bellman equation
<code>$$V_n=\min_{k_{n+1}}\left\{\frac{c_n}{k_n-k_{n+1}}+V_{n+1}\right\}.$$</code>
This equation echoes my objective in <a href="#solving-the-two-lap-case">the two-lap case</a>.
Intuitively, my optimal speeds in the <code>\(N\)</code>-lap case solve a sequence of two-lap problems, where the second “lap” is the remainder of my race.</p>
<p>We can solve the Bellman equation using the <a href="https://en.wikipedia.org/wiki/Method_of_undetermined_coefficients">method of undetermined coefficients</a>.
Suppose <code>\(V_{n+1}=a_{n+1}/k_{n+1}\)</code> for some <code>\(a_{n+1}\ge0\)</code>.
Then, under optimal pacing, we have
<code>$$\begin{align} 0 &= \parfrac{}{k_{n+1}}\left(\frac{c_n}{k_n-k_{n+1}}+V_{n+1}\right) \\ &= \frac{c_n}{(k_n-k_{n+1})^2}-\frac{a_{n+1}}{k_{n+1}^2} \end{align}$$</code>
and therefore
<code>$$k_{n+1}=\frac{\sqrt{a_{n+1}}}{\sqrt{a_{n+1}}+\sqrt{c_n}}k_n.$$</code>
Substituting this recurrence into the Bellman equation gives <code>\(V_n=a_n/k_n\)</code>, where
<code>$$a_n\equiv\left(\sqrt{a_{n+1}}+\sqrt{c_n}\right)^2$$</code>
and <code>\(a_{N+1}=0\)</code>.
Solving recursively gives
<code>$$\sqrt{a_n}=\sum_{m=n}^N\sqrt{c_m}$$</code>
for each <code>\(n\)</code>, from which it follows that
<code>$$k_{n+1}=\frac{\sum_{m=n+1}^N\sqrt{c_m}}{\sum_{m=1}^N\sqrt{c_m}}k$$</code>
and
<code>$$s_n=\frac{k}{\sqrt{c_n}\sum_{m=1}^N\sqrt{c_m}}.$$</code>
So we get the same optimal speed sequence and total time as obtained <a href="#using-the-hamiltonian">using the Hamiltonian</a>.
We also see the square-root scaling from the two-lap case generalize to the <code>\(N\)</code>-lap case.
For example, if the costs I face in the first half of the race are four times the costs I face in the second, then I should run half as fast in the first half as in the second.</p>
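<p>The backward recursion translates directly into code. The Python sketch below (illustrative costs again) computes <code>\(\sqrt{a_n}\)</code> backward, rolls the energy forward, and confirms that the resulting speeds match the Hamiltonian solution.</p>

```python
import math

N, k = 8, 3.0
costs = [1.0, 4.0, 2.25, 1.0, 0.25, 1.0, 2.25, 4.0]  # arbitrary example

# Backward pass: sqrt(a_n) = sqrt(a_{n+1}) + sqrt(c_n), with a_{N+1} = 0.
sqrt_a = [0.0] * (N + 2)
for n in range(N, 0, -1):
    sqrt_a[n] = sqrt_a[n + 1] + math.sqrt(costs[n - 1])

# Forward pass: k_{n+1} = sqrt(a_{n+1}) / (sqrt(a_{n+1}) + sqrt(c_n)) * k_n.
energy = k
speeds = []
for n in range(1, N + 1):
    nxt = sqrt_a[n + 1] / (sqrt_a[n + 1] + math.sqrt(costs[n - 1])) * energy
    speeds.append((energy - nxt) / costs[n - 1])  # s_n = (k_n - k_{n+1}) / c_n
    energy = nxt

# The Hamiltonian solution, for comparison.
S = sum(math.sqrt(c) for c in costs)
closed_form = [k / (math.sqrt(c) * S) for c in costs]
```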
<h2 id="solution-properties">Solution properties</h2>
<p>As explained above, each speed term <code>\(s_n\)</code> scales with the inverse square-root of the corresponding cost term <code>\(c_n\)</code>.
This scaling takes advantage of the variation in costs faced during my race.
But scaling <em>all</em> of the cost terms has a linear effect: doubling each <code>\(c_n\)</code> halves each <code>\(s_n\)</code> and so doubles my total time <code>\(T\)</code>.
Likewise, doubling my initial energy <code>\(k\)</code> doubles each <code>\(s_n\)</code> and so halves <code>\(T\)</code>.
These linearities come from the linearity of the dynamic constraint <code>\(k_{n+1}=k_n-c_ns_n\)</code>.</p>
<p>Rearranging the cost sequence <code>\((c_n)_{n=1}^N\)</code> leads to the same rearrangement of the optimal speed sequence <code>\((s_n)_{n=1}^N\)</code>.
This is because the sequences satisfy
<code>$$\sqrt{c_n}s_n=\frac{k}{\sum_{m=1}^N\sqrt{c_m}},$$</code>
the right-hand side of which doesn’t change if I rearrange the <code>\(c_n\)</code>.
Nor does my minimized time <code>\(T\)</code> change.
Intuitively, swapping the laps on which I run slow and fast doesn’t change my average pace.</p>
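<p>Both properties, the linear scalings and the invariance to rearrangement, can be confirmed in a few lines of Python:</p>

```python
import math

def optimal_time(costs, k):
    """Minimized total time T = (sum_n sqrt(c_n))^2 / k."""
    return sum(math.sqrt(c) for c in costs) ** 2 / k

k = 2.0
costs = [3.0, 1.0, 0.5, 2.0]
T = optimal_time(costs, k)

T_double_costs = optimal_time([2 * c for c in costs], k)  # doubles T
T_double_energy = optimal_time(costs, 2 * k)              # halves T
T_rearranged = optimal_time(sorted(costs), k)             # leaves T unchanged
```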
<p>Whereas variation in costs <em>improves</em> my average pace.
To see how, let
<code>$$\E[c_n]\equiv\frac{1}{N}\sum_{n=1}^Nc_n$$</code>
be the empirical mean cost of energy during my race and let
<code>$$\overline{T}\equiv\frac{N^2\E[c_n]}{k}$$</code>
be my optimal time when <code>\(c_n=\E[c_n]\)</code> for each <code>\(n\)</code>.
Then
<code>$$\begin{align} \overline{T}-T &= \frac{N^2}{k}\left(\E[c_n]-\frac{1}{N^2}\left(\sum_{n=1}^N\sqrt{c_n}\right)^2\right) \\ &= \frac{N^2}{k}\left(\E[\sqrt{c_n}^2]-\E[\sqrt{c_n}]^2\right) \end{align}$$</code>
and therefore
<code>$$T=\overline{T}-\frac{N^2}{k}\Var(\sqrt{c_n}),$$</code>
where <code>\(\Var(\sqrt{c_n})=\E[\sqrt{c_n}^2]-\E[\sqrt{c_n}]^2\)</code> is the empirical variance of the <code>\(\sqrt{c_n}\)</code>.
So applying a <a href="https://en.wikipedia.org/wiki/Mean-preserving_spread">mean-preserving spread</a> to the distribution of <code>\(\sqrt{c_n}\)</code> values lowers my optimal time <code>\(T\)</code>.
But this is not the same as increasing the variance in <code>\(c_n\)</code>.
For example, consider the cost sequences <code>\((c_n)_{n=1}^{100}\)</code> and <code>\((c_n')_{n=1}^{100}\)</code> defined by
<code>$$c_n=\begin{cases} 145 & \text{if}\ n\le 50 \\ 55 & \text{otherwise} \end{cases}$$</code>
and
<code>$$c_n'=\begin{cases} 200 & \text{if}\ n\le 20 \\ 75 & \text{otherwise}. \end{cases}$$</code>
Then <code>\(\E[c_n]=\E[c_n']=100\)</code>, while <code>\(\Var(c_n)=2025\)</code> and <code>\(\Var(c_n')=2500\)</code>.
So the <code>\(c_n\)</code> have lower variance than the <code>\(c_n'\)</code>.
But <code>\(\Var(\sqrt{c_n})\approx5.3\)</code> is larger than <code>\(\Var(\sqrt{c_n'})\approx4.8\)</code>, which means my optimal time is <em>smaller</em> under <code>\((c_n)_{n=1}^{100}\)</code> than under <code>\((c_n')_{n=1}^{100}\)</code>.
Intuitively, I <a href="https://bldavies.com/blog/binary-distributions-risky-gambles">prefer</a> cost sequences with a mix of highs and lows to sequences with a few sharp highs and lots of mild lows.</p>
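<p>Here is a Python check of this example, including the fact that <code>\(T=N^2\left(\E[c_n]-\Var(\sqrt{c_n})\right)/k\)</code>:</p>

```python
import math

N, k = 100, 1.0
c = [145.0] * 50 + [55.0] * 50
cp = [200.0] * 20 + [75.0] * 80

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):  # population variance
    return mean([x * x for x in xs]) - mean(xs) ** 2

T = sum(math.sqrt(x) for x in c) ** 2 / k
Tp = sum(math.sqrt(x) for x in cp) ** 2 / k

root_var_c = var([math.sqrt(x) for x in c])    # larger, despite var(c) < var(cp)
root_var_cp = var([math.sqrt(x) for x in cp])
print(root_var_c, root_var_cp, T < Tp)
```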
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Replacing <code>\(T\)</code> with <code>\(T/N\)</code>, and letting <code>\(x=n/N\)</code> and <code>\(N\to\infty\)</code>, yields a special (linear) case of the problem discussed in <a href="https://bldavies.com/blog/rationalizing-negative-splits/">my post on negative splits</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Correlation and concatenation
https://bldavies.com/blog/correlation-concatenation/
Thu, 17 Nov 2022 00:00:00 +0000https://bldavies.com/blog/correlation-concatenation/<p>Suppose I have data <code>\((a_i,b_i)_{i=1}^n\)</code> on two random variables <code>\(A\)</code> and <code>\(B\)</code>.
I store my data as vectors <code>a</code> and <code>b</code>, and compute their correlation using the <code>cor</code> function in R:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">cor</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span>
</code></pre></div><pre><code>## [1] 0.4326075
</code></pre><p>Now suppose I append a mirrored version of my data by defining the vectors</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">alpha</span> <span class="o">=</span> <span class="nf">c</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span>
<span class="n">beta</span> <span class="o">=</span> <span class="nf">c</span><span class="p">(</span><span class="n">b</span><span class="p">,</span> <span class="n">a</span><span class="p">)</span>
</code></pre></div><p>so that <code>alpha</code> is a concatenation of the <code>\(a_i\)</code> and <code>\(b_i\)</code> values, and <code>beta</code> is a concatenation of the <code>\(b_i\)</code> and <code>\(a_i\)</code> values.
I compute the correlation of <code>alpha</code> and <code>beta</code> as before:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">cor</span><span class="p">(</span><span class="n">alpha</span><span class="p">,</span> <span class="n">beta</span><span class="p">)</span>
</code></pre></div><pre><code>## [1] 0.4288428
</code></pre><p>Notice that <code>cor(a, b)</code> and <code>cor(alpha, beta)</code> are not equal.
This surprised me.
How can appending a copy of <em>the same data</em> change the correlation within those data?</p>
<p>The answer is that the concatenated data <code>\((\alpha_i,\beta_i)_{i=1}^{2n}\)</code> have different marginal distributions than the original data <code>\((a_i,b_i)_{i=1}^n\)</code>.
Indeed one can show that
<code>$$\DeclareMathOperator{\Cor}{Cor} \DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\E}{E} \DeclareMathOperator{\Var}{Var} \begin{align} \E[\alpha]=\E[\beta]=\frac{\E[a]+\E[b]}{2} \end{align}$$</code>
and
<code>$$\begin{align} \E[\alpha^2]=\E[\beta^2]=\frac{\E[a^2]+\E[b^2]}{2}, \end{align}$$</code>
where
<code>$$\E[\alpha]\equiv\frac{1}{2n}\sum_{i=1}^{2n}\alpha_i$$</code>
is the empirical mean of the <code>\(\alpha_i\)</code> values, and where <code>\(\E[\beta]\)</code>, <code>\(\E[a]\)</code>, and <code>\(\E[b]\)</code> are defined similarly.
It turns out that <code>\(\E[\alpha\beta]=\E[ab]\)</code>, but since the marginal distributions are different the empirical correlations are different.
In fact
<code>$$\Cor(\alpha,\beta)=\frac{\Cov(a,b)-0.25\left(\E[a]-\E[b]\right)^2}{0.5\Var(a)+0.5\Var(b)+0.25\left(\E[a]-\E[b]\right)^2},$$</code>
where <code>\(\Cor\)</code>, <code>\(\Cov\)</code>, and <code>\(\Var\)</code> are the empirical correlation, covariance, and variance operators.
This expression implies that <code>cor(alpha, beta)</code> and <code>cor(a, b)</code> will be equal if the <code>\(a_i\)</code> and <code>\(b_i\)</code> values have the same means and variances.
We can achieve this by scaling <code>a</code> and <code>b</code> before computing their correlation:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">cor</span><span class="p">(</span><span class="nf">scale</span><span class="p">(</span><span class="n">a</span><span class="p">),</span> <span class="nf">scale</span><span class="p">(</span><span class="n">b</span><span class="p">))</span>
</code></pre></div><pre><code>## [1] 0.4326075
</code></pre><p>The <code>scale</code> function de-means its argument and scales it to have unit variance.
These operations don’t change the correlation of <code>a</code> and <code>b</code>.
But they <em>do</em> change the correlation of <code>alpha</code> and <code>beta</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">alpha</span> <span class="o">=</span> <span class="nf">c</span><span class="p">(</span><span class="nf">scale</span><span class="p">(</span><span class="n">a</span><span class="p">),</span> <span class="nf">scale</span><span class="p">(</span><span class="n">b</span><span class="p">))</span>
<span class="n">beta</span> <span class="o">=</span> <span class="nf">c</span><span class="p">(</span><span class="nf">scale</span><span class="p">(</span><span class="n">b</span><span class="p">),</span> <span class="nf">scale</span><span class="p">(</span><span class="n">a</span><span class="p">))</span>
<span class="nf">cor</span><span class="p">(</span><span class="n">alpha</span><span class="p">,</span> <span class="n">beta</span><span class="p">)</span>
</code></pre></div><pre><code>## [1] 0.4326075
</code></pre><p>Now the two correlations agree!</p>
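<p>The closed-form expression for <code>\(\Cor(\alpha,\beta)\)</code> can also be checked directly. Here is a Python translation of the workflow (simulated data of my own; <code>cov</code> and friends use the population <code>\(1/n\)</code> normalization to match the empirical operators above):</p>

```python
import random

random.seed(1)
n = 200
a = [random.gauss(5, 2) for _ in range(n)]
b = [0.4 * x + random.gauss(0, 1) for x in a]  # correlated with a, different mean

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):  # population (1/n) covariance
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def cor(xs, ys):
    return cov(xs, ys) / (cov(xs, xs) * cov(ys, ys)) ** 0.5

alpha, beta = a + b, b + a
lhs = cor(alpha, beta)
rhs = (cov(a, b) - 0.25 * (mean(a) - mean(b)) ** 2) / (
    0.5 * cov(a, a) + 0.5 * cov(b, b) + 0.25 * (mean(a) - mean(b)) ** 2)
print(lhs, rhs, cor(a, b))
```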
<p>I came across this phenomenon while writing <a href="https://bldavies.com/blog/friendship-paradox/">my previous post</a>, in which I discuss the degree <a href="https://bldavies.com/blog/assortative-mixing/">assortativity</a> among nodes in <a href="https://en.wikipedia.org/wiki/Zachary's_karate_club">Zachary’s (1977) karate club network</a>.
One way to measure this assortativity is to use the <code>assortativity_degree</code> function in <a href="https://igraph.org/">igraph</a>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">igraph</span><span class="p">)</span>
<span class="n">G</span> <span class="o">=</span> <span class="nf">graph.famous</span><span class="p">(</span><span class="s">'Zachary'</span><span class="p">)</span>
<span class="nf">assortativity_degree</span><span class="p">(</span><span class="n">G</span><span class="p">)</span>
</code></pre></div><pre><code>## [1] -0.4756131
</code></pre><p>This function returns the correlation of the degrees of adjacent nodes in <code>G</code>.
Another way to compute this correlation is to</p>
<ol>
<li>construct a matrix <code>el</code> in which rows correspond to edges and columns list incident nodes;</li>
<li>define the vectors <code>d1</code> and <code>d2</code> of degrees among the nodes listed in <code>el</code>;</li>
<li>compute the correlation of <code>d1</code> and <code>d2</code> using <code>cor</code>.</li>
</ol>
<p>Here’s what I get when I take those three steps:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">el</span> <span class="o">=</span> <span class="nf">as_edgelist</span><span class="p">(</span><span class="n">G</span><span class="p">)</span>
<span class="n">d</span> <span class="o">=</span> <span class="nf">degree</span><span class="p">(</span><span class="n">G</span><span class="p">)</span>
<span class="n">d1</span> <span class="o">=</span> <span class="n">d[el[</span><span class="p">,</span> <span class="m">1</span><span class="n">]]</span> <span class="c1"># Ego degrees</span>
<span class="n">d2</span> <span class="o">=</span> <span class="n">d[el[</span><span class="p">,</span> <span class="m">2</span><span class="n">]]</span> <span class="c1"># Alter degrees</span>
<span class="nf">cor</span><span class="p">(</span><span class="n">d1</span><span class="p">,</span> <span class="n">d2</span><span class="p">)</span>
</code></pre></div><pre><code>## [1] -0.4769563
</code></pre><p>Notice that <code>cor(d1, d2)</code> disagrees with the value of <code>assortativity_degree(G)</code> computed above.
This is because the vectors <code>d1</code> and <code>d2</code> have different means and variances:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">c</span><span class="p">(</span><span class="nf">mean</span><span class="p">(</span><span class="n">d1</span><span class="p">),</span> <span class="nf">mean</span><span class="p">(</span><span class="n">d2</span><span class="p">))</span>
</code></pre></div><pre><code>## [1] 7.487179 8.051282
</code></pre><div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">c</span><span class="p">(</span><span class="nf">var</span><span class="p">(</span><span class="n">d1</span><span class="p">),</span> <span class="nf">var</span><span class="p">(</span><span class="n">d2</span><span class="p">))</span>
</code></pre></div><pre><code>## [1] 25.94139 32.23110
</code></pre><p>These differences come from <code>el</code> listing each edge only once: it includes a row <code>c(i, j)</code> for the edge between nodes <code>\(i\)</code> and <code>\(j\not=i\)</code>, but not a row <code>c(j, i)</code>.
Whereas <code>assortativity_degree</code> accounts for edges being undirected by adding the row <code>c(j, i)</code> before computing the correlation.
This is analogous to the “append the mirrored data” step I took to create <code>\((\alpha_i,\beta_i)_{i=1}^{2n}\)</code> above.
Appending the mirror of <code>el</code> to itself before computing <code>cor(d1, d2)</code> returns the same value as <code>assortativity_degree(G)</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">el</span> <span class="o">=</span> <span class="nf">rbind</span><span class="p">(</span>
<span class="n">el</span><span class="p">,</span>
<span class="nf">matrix</span><span class="p">(</span><span class="nf">c</span><span class="p">(</span><span class="n">el[</span><span class="p">,</span> <span class="m">2</span><span class="n">]</span><span class="p">,</span> <span class="n">el[</span><span class="p">,</span> <span class="m">1</span><span class="n">]</span><span class="p">),</span> <span class="n">ncol</span> <span class="o">=</span> <span class="m">2</span><span class="p">)</span> <span class="c1"># el's mirror</span>
<span class="p">)</span>
<span class="n">d1</span> <span class="o">=</span> <span class="n">d[el[</span><span class="p">,</span> <span class="m">1</span><span class="n">]]</span>
<span class="n">d2</span> <span class="o">=</span> <span class="n">d[el[</span><span class="p">,</span> <span class="m">2</span><span class="n">]]</span>
<span class="nf">c</span><span class="p">(</span><span class="nf">assortativity_degree</span><span class="p">(</span><span class="n">G</span><span class="p">),</span> <span class="nf">cor</span><span class="p">(</span><span class="n">d1</span><span class="p">,</span> <span class="n">d2</span><span class="p">))</span>
</code></pre></div><pre><code>## [1] -0.4756131 -0.4756131
</code></pre>The friendship paradox
https://bldavies.com/blog/friendship-paradox/
Wed, 16 Nov 2022 00:00:00 +0000https://bldavies.com/blog/friendship-paradox/<p>People tend to be less popular than their friends.
This <a href="https://en.wikipedia.org/wiki/Friendship_paradox">paradox</a>, first observed by <a href="https://doi.org/10.1086/229693">Feld (1991)</a>, is due to popular people appearing on many friend lists.</p>
<p>For example, consider the social network among members of a karate club studied by <a href="https://doi.org/10.1086/jar.33.4.3629752">Zachary (1977)</a>:</p>
<p><img src="figures/zachary-1.svg" alt=""></p>
<p>The network contains <code>\(n=34\)</code> nodes with mean degree
<code>$$\DeclareMathOperator{\Corr}{Corr} \DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\E}{E} \DeclareMathOperator{\Var}{Var} \E[d_i]\equiv\frac{1}{n}\sum_{i=1}^nd_i=4.59,$$</code>
where <code>\(\E\)</code> takes expected values across nodes and <code>\(d_i\)</code> is the degree of node <code>\(i\)</code>.
If <code>\(N_i\)</code> denotes the set of <code>\(i\)</code>'s neighbors, then the mean degree among those neighbors equals
<code>$$f_i\equiv \frac{1}{d_i}\sum_{j\in N_i}d_j.$$</code>
The friendship paradox states that <code>\(\E[d_i]\le\E[f_i]\)</code> in <em>any</em> network.
In Zachary’s network we have <code>\(\E[f_i]=9.61\)</code>, about twice the mean degree.</p>
<p>We can approximate <code>\(\E[f_i]\)</code> using the following procedure:</p>
<ol>
<li>Choose a stub (i.e., the endpoint of an edge) uniformly at random.</li>
<li>Record the degree of the chosen stub.</li>
</ol>
<p>Repeating these steps many times yields a degree distribution that over-samples from high-degree nodes.
The mean of this distribution answers the following question: “How many friends does a typical friend have?”
The probability of choosing node <code>\(i\)</code> in the first step equals
<code>$$p_i\equiv \frac{d_i}{\sum_{j=1}^nd_j},$$</code>
the proportion of stubs that <code>\(i\)</code> adds to the network.
Using the probabilities <code>\(p_i\)</code> to compute the expected value of the degrees <code>\(d_i\)</code> yields an approximation
<code>$$\begin{align} \widehat{\E[f_i]} &= \sum_{i=1}^np_id_i \\ &= \sum_{i=1}^n\left(\frac{d_i}{\sum_{j=1}^nd_j}\right)d_i \\ &= \frac{\sum_{i=1}^nd_i^2}{\sum_{j=1}^nd_j} \\ &= \frac{\E[d_i^2]}{\E[d_i]} \\ &= \E[d_i]+\frac{\Var(d_i)}{\E[d_i]} \end{align}$$</code>
of <code>\(\E[f_i]\)</code>.
Notice that if <code>\(\Var(d_i)=0\)</code> then <code>\(\widehat{\E[f_i]}=\E[d_i]\)</code>; in that case, everyone has the same degree as their friends, and so there is no friendship paradox.
The difference between the mean degree <code>\(\E[d_i]\)</code> and the typical friend’s degree <code>\(\widehat{\E[f_i]}\)</code> grows as the variance in degrees grows.</p>
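<p>The stub-counting argument is easy to verify on a toy example. The Python sketch below uses a four-node star graph (my example, not Zachary's network) and enumerates stubs exactly instead of sampling them; the typical friend's degree then equals <code>\(\E[d_i]+\Var(d_i)/\E[d_i]\)</code>.</p>

```python
# Star graph: hub node 0 linked to leaves 1, 2, 3, so degrees are [3, 1, 1, 1].
edges = [(0, 1), (0, 2), (0, 3)]
n = 4

deg = [0] * n
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

mean_d = sum(deg) / n
var_d = sum(d * d for d in deg) / n - mean_d ** 2

# Enumerate stubs exactly: each node contributes one stub per incident edge.
stub_degrees = [deg[i] for i, j in edges] + [deg[j] for i, j in edges]
hat = sum(stub_degrees) / len(stub_degrees)  # typical friend's degree

print(mean_d, hat)
```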
<p>The approximation <code>\(\widehat{\E[f_i]}\)</code> is closest to <code>\(\E[f_i]\)</code> when there is no <a href="https://bldavies.com/blog/assortative-mixing/">assortative mixing</a> with respect to degree.
Then the <code>\(d_i\)</code> are uncorrelated with the <code>\(f_i\)</code>.
But this isn’t true in Zachary’s network:</p>
<p><img src="figures/zachary-degrees-1.svg" alt=""></p>
<p>Indeed, in Zachary’s network we have <code>\(\widehat{\E[f_i]}=7.77\)</code>, which is smaller than the true value <code>\(\E[f_i]=9.61\)</code>.
To see why, notice that
<code>$$\begin{align} \E[d_if_i] &= \frac{1}{n}\sum_{i=1}^nd_if_i \\ &= \frac{1}{n}\sum_{i=1}^n\sum_{j\in N_i}d_j \\ &\overset{\star}{=}\frac{1}{n}\sum_{j=1}^nd_j^2 \\ &= \E[d_j^2], \end{align}$$</code>
where <code>\(\star\)</code> holds because <code>\(j\)</code> appears in <code>\(d_j\)</code> neighborhoods <code>\(N_i\)</code>.
But
<code>$$\E[d_if_i]=\E[d_i]\E[f_i]+\Cov(d_i,f_i)$$</code>
by the definition of covariance, from which it follows that
<code>$$\widehat{\E[f_i]}=\E[f_i]+\frac{\Cov(d_i,f_i)}{\E[d_i]}.$$</code>
Thus <code>\(\widehat{\E[f_i]}\)</code> under-estimates <code>\(\E[f_i]\)</code> in Zachary’s network because <code>\(\Cov(d_i,f_i)=-8.45\)</code> is negative.</p>
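<p>The identity <code>\(\widehat{\E[f_i]}=\E[f_i]+\Cov(d_i,f_i)/\E[d_i]\)</code> can likewise be checked on a toy example. The Python sketch below uses a four-node path graph (my example, not from the post), where <code>\(\Cov(d_i,f_i)\)</code> is also negative:</p>

```python
# Path graph 0-1-2-3: end nodes have degree 1, middle nodes degree 2.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4

nbrs = {i: set() for i in range(n)}
for i, j in edges:
    nbrs[i].add(j)
    nbrs[j].add(i)

deg = [len(nbrs[i]) for i in range(n)]
f = [sum(deg[j] for j in nbrs[i]) / deg[i] for i in range(n)]  # mean neighbor degree

def mean(xs):
    return sum(xs) / len(xs)

hat = sum(d * d for d in deg) / sum(deg)  # E[d^2] / E[d]
cov_df = mean([d * x for d, x in zip(deg, f)]) - mean(deg) * mean(f)

print(hat, mean(f) + cov_df / mean(deg))
```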
<p>The value of <code>\(\widehat{\E[f_i]}\)</code> depends only on the mean and variance of degrees, and not the correlation of degrees across adjacent nodes.
Thus <code>\(\widehat{\E[f_i]}\)</code> is invariant to <a href="https://bldavies.com/blog/degree-preserving-randomisation/">degree-preserving randomizations</a> (DPRs).
But <code>\(\E[f_i]\)</code> can vary under DPRs because they can change the correlation of adjacent nodes’ degrees.
For example, consider the three networks shown below:</p>
<p><img src="figures/dpr-example-1.svg" alt=""></p>
<p>The networks <code>\(G_1\)</code>, <code>\(G_2\)</code>, and <code>\(G_3\)</code> have the same degree distributions.
As a result, they have the same mean degrees <code>\(\E[d_i]\)</code> and approximations <code>\(\widehat{\E[f_i]}\)</code> of <code>\(\E[f_i]\)</code>.
But the true values of <code>\(\E[f_i]\)</code> differ because the correlations <code>\(\Corr(d_i,f_i)\)</code> differ:</p>
<table>
<thead>
<tr>
<th align="center">Network</th>
<th align="center"><code>\(\E[d_i]\)</code></th>
<th align="center"><code>\(\widehat{\E[f_i]}\)</code></th>
<th align="center"><code>\(\E[f_i]\)</code></th>
<th align="center"><code>\(\Corr(d_i,f_i)\)</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center"><code>\(G_1\)</code></td>
<td align="center">1.43</td>
<td align="center">1.6</td>
<td align="center">1.43</td>
<td align="center">1.00</td>
</tr>
<tr>
<td align="center"><code>\(G_2\)</code></td>
<td align="center">1.43</td>
<td align="center">1.6</td>
<td align="center">1.57</td>
<td align="center">0.20</td>
</tr>
<tr>
<td align="center"><code>\(G_3\)</code></td>
<td align="center">1.43</td>
<td align="center">1.6</td>
<td align="center">1.71</td>
<td align="center">-0.91</td>
</tr>
</tbody>
</table>
<p>The network <code>\(G_1\)</code> is perfectly assortative with respect to degree, so <code>\(\widehat{\E[f_i]}\)</code> over-estimates <code>\(\E[f_i]\)</code>.
The network <code>\(G_3\)</code>, by contrast, is disassortative with respect to degree, so <code>\(\widehat{\E[f_i]}\)</code> under-estimates <code>\(\E[f_i]\)</code>.
The network <code>\(G_2\)</code> is relatively unsorted, so <code>\(\widehat{\E[f_i]}\)</code> is close to <code>\(\E[f_i]\)</code>.</p>
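<p>Three hypothetical seven-node networks with degree sequence <code>\((2,2,2,1,1,1,1)\)</code>—not necessarily the networks pictured, but related to each other by DPRs—reproduce the first three columns of the table:</p>

```python
# Three networks sharing the degree sequence (2, 2, 2, 1, 1, 1, 1),
# i.e., related by degree-preserving randomization.
networks = {
    "G1": [(0, 1), (1, 2), (2, 0), (3, 4), (5, 6)],  # triangle + two edges
    "G2": [(0, 1), (1, 2), (2, 3), (3, 4), (5, 6)],  # 5-path + one edge
    "G3": [(0, 1), (0, 2), (1, 3), (4, 5), (4, 6)],  # 4-path + a "cherry"
}

def stats(edges, n=7):
    """Return (E[d_i], approximation of E[f_i], true E[f_i])."""
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    d = [len(nbrs[i]) for i in range(n)]
    f = [sum(d[j] for j in nbrs[i]) / d[i] for i in range(n)]
    E_d = sum(d) / n
    approx = (sum(x * x for x in d) / n) / E_d
    return E_d, approx, sum(f) / n

results = {name: stats(edges) for name, edges in networks.items()}
# All three share E[d_i] = 1.43 and an approximation of 1.6,
# but the true E[f_i] values are 1.43, 1.57, and 1.71 respectively.
```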
Binary distributions and risky gambles
https://bldavies.com/blog/binary-distributions-risky-gambles/
Sun, 13 Nov 2022 00:00:00 +0000https://bldavies.com/blog/binary-distributions-risky-gambles/<p>This post shows how binary random variables can be defined by their mean, variance, and skewness.
I use this fact to explain why variance does not (always) measure “riskiness.”</p>
<p>Suppose I’m defining a random variable <code>\(X\)</code>.
It takes value <code>\(H\)</code> or <code>\(L<H\)</code>, with <code>\(\Pr(X=H)=p\)</code>.
I want <code>\(X\)</code> to have mean <code>\(\mu\)</code>, variance <code>\(\sigma^2\)</code>, and <a href="https://en.wikipedia.org/wiki/Skewness#Fisher's_moment_coefficient_of_skewness">skewness coefficient</a>
<code>$$\DeclareMathOperator{\E}{E} s\equiv\E\left[\left(\frac{X-\mu}{\sigma}\right)^3\right].$$</code>
The target parameters <code>\((\mu,\sigma,s)\)</code> uniquely determine <code>\((H,L,p)\)</code> via
<code>$$\begin{align} H &= \mu+\frac{s+\sqrt{s^2+4}}{2}\sigma \\ L &= \mu+\frac{s-\sqrt{s^2+4}}{2}\sigma \\ p &= \frac{2}{4+s\left(s+\sqrt{s^2+4}\right)}. \end{align}$$</code></p>
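<p>A short sketch makes the mapping concrete, verifying it by computing the moments of the implied distribution directly (the function name is mine):</p>

```python
import math

def binary_from_moments(mu, sigma, s):
    """Return (H, L, p) so that X in {H, L} with Pr(X = H) = p has
    mean mu, standard deviation sigma, and skewness s."""
    r = math.sqrt(s * s + 4)
    H = mu + (s + r) / 2 * sigma
    L = mu + (s - r) / 2 * sigma
    p = 2 / (4 + s * (s + r))
    return H, L, p

# Check the moments directly for an asymmetric example.
mu, sigma, s = 10.0, 12.0, 5.0
H, L, p = binary_from_moments(mu, sigma, s)
mean = p * H + (1 - p) * L
var = p * (H - mean) ** 2 + (1 - p) * (L - mean) ** 2
skew = (p * (H - mean) ** 3 + (1 - p) * (L - mean) ** 3) / var ** 1.5
assert abs(mean - mu) < 1e-9
assert abs(var - sigma ** 2) < 1e-8
assert abs(skew - s) < 1e-9
```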
<p>For example, if I want <code>\(X\)</code> to be symmetric (i.e., to have <code>\(s=0\)</code>) then I have to choose <code>\((H,L,p)=(\mu+\sigma,\mu-\sigma,0.5)\)</code>.
Increasing the target skewness <code>\(s\)</code> makes the upside <code>\((H-\mu)\)</code> larger but less likely, and the downside <code>\((\mu-L)\)</code> smaller but more likely:</p>
<p><img src="figures/required-values-1.svg" alt=""></p>
<p>This mapping between <code>\((\mu,\sigma,s)\)</code> and <code>\((H,L,p)\)</code> is useful for generating examples of “risky” gambles.
Intuition suggests that a gamble is less risky if its payoffs have lower variance.
But <a href="https://doi.org/10.1016/0022-0531%2870%2990038-4">Rothschild and Stiglitz (1970)</a> define a gamble <code>\(A\)</code> to be less risky than gamble <code>\(B\)</code> if every <a href="https://en.wikipedia.org/wiki/Risk_aversion">risk averse</a> decision-maker (DM) prefers <code>\(A\)</code> to <code>\(B\)</code>.
These two definitions of “risky” agree when</p>
<ol>
<li>payoffs are normally distributed, or</li>
<li>DMs have quadratic utility functions.</li>
</ol>
<p>Under those conditions, DMs’ expected utility depends only on the payoffs’ mean and variance.
But if neither condition holds then DMs also care about payoffs’ skewness.
We can demonstrate this using binary gambles.
Consider these three:</p>
<ul>
<li>Gamble <code>\(A\)</code>'s payoffs have mean <code>\(\mu_A=10\)</code>, variance <code>\(\sigma_A^2=36\)</code>, and skewness <code>\(s_A=0\)</code>;</li>
<li>Gamble <code>\(B\)</code>'s payoffs have mean <code>\(\mu_B=10\)</code>, variance <code>\(\sigma_B^2=144\)</code>, and skewness <code>\(s_B=5\)</code>;</li>
<li>Gamble <code>\(C\)</code>'s payoffs have mean <code>\(\mu_C=10\)</code>, variance <code>\(\sigma_C^2=9\)</code>, and skewness <code>\(s_C=-3\)</code>.</li>
</ul>
<p>The means are the same but the distributions are different.
Gamble <code>\(i\in\{A,B,C\}\)</code> gives me a random payoff <code>\(X_i\)</code>, which equals <code>\(H_i\)</code> with probability <code>\(p_i\)</code> and <code>\(L_i\)</code> otherwise.
We can compute the <code>\((H_i,L_i,p_i)\)</code> using the target parameters <code>\((\mu_i,\sigma_i,s_i)\)</code> and the formulas above:</p>
<table>
<thead>
<tr>
<th align="center">Gamble <code>\(i\)</code></th>
<th align="center"><code>\(H_i\)</code></th>
<th align="center"><code>\(L_i\)</code></th>
<th align="center"><code>\(p_i\)</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center"><code>\(A\)</code></td>
<td align="center">16.00</td>
<td align="center">4.00</td>
<td align="center">0.50</td>
</tr>
<tr>
<td align="center"><code>\(B\)</code></td>
<td align="center">72.31</td>
<td align="center">7.69</td>
<td align="center">0.04</td>
</tr>
<tr>
<td align="center"><code>\(C\)</code></td>
<td align="center">10.91</td>
<td align="center">0.09</td>
<td align="center">0.92</td>
</tr>
</tbody>
</table>
<p>Gamble <code>\(A\)</code> offers a symmetric payoff: its upside <code>\((H_A-\mu_A)\)</code> and downside <code>\((\mu_A-L_A)\)</code> are equally large and equally likely.
Gamble <code>\(B\)</code> offers a positively skewed payoff: a large but unlikely upside, and a small but likely downside.
Gamble <code>\(C\)</code> offers a negatively skewed payoff: a small but likely upside, and a large but unlikely downside.</p>
<p>These upsides and downsides affect my preferences over gambles.
Suppose I get utility <code>\(u(x)\equiv\log(x)\)</code> from receiving payoff <code>\(x\)</code>.
Then gamble <code>\(A\)</code> gives me expected utility
<code>$$\begin{align} \E[u(X_A)] &\equiv p_Au(H_A)+(1-p_A)u(L_A) \\ &= 0.5\log(16)+(1-0.5)\log(4) \\ &= 2.08, \end{align}$$</code>
while <code>\(B\)</code> gives me <code>\(\E[u(X_B)]=2.12\)</code> and <code>\(C\)</code> gives me <code>\(\E[u(X_C)]=1.99\)</code>.
So I prefer gamble <code>\(B\)</code> to <code>\(A\)</code>, even though <code>\(B\)</code>'s payoffs have four times the variance of <code>\(A\)</code>'s.
I also prefer <code>\(B\)</code> to <code>\(C\)</code>, even though <code>\(B\)</code>'s payoffs have <em>sixteen</em> times the variance of <code>\(C\)</code>'s.
How can I be risk averse—that is, have a concave utility function—but prefer gambles with higher variance?
The answer is that I also care about skewness: I prefer gambles with large upsides and small downsides.
These “sides” of risk are not captured by variance.</p>
<p>So is gamble <code>\(C\)</code> “riskier” than gambles <code>\(A\)</code> and <code>\(B\)</code>?
Rothschild and Stiglitz wouldn’t say so.
To see why, suppose my friend has utility function <code>\(v(x)=\sqrt{x}\)</code>.
Then gamble <code>\(A\)</code> gives him expected utility <code>\(\E[v(X_A)]=3\)</code>, while <code>\(B\)</code> gives him <code>\(\E[v(X_B)]=2.98\)</code> and <code>\(C\)</code> gives him <code>\(\E[v(X_C)]=3.05\)</code>.
My friend and I have <em>opposite</em> preferences: he prefers <code>\(C\)</code> to <code>\(A\)</code> to <code>\(B\)</code>, whereas I prefer <code>\(B\)</code> to <code>\(A\)</code> to <code>\(C\)</code>.
But we’re both risk averse: our utility functions are both concave!
Thus, it isn’t true that <em>every</em> risk-averse decision-maker prefers <code>\(A\)</code> or <code>\(B\)</code> to <code>\(C\)</code>.
Different risk-averse DMs have different preference rankings.
This makes the three gambles incomparable under Rothschild and Stiglitz’s definition of “risky.”</p>
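<p>Both rankings can be reproduced in a few lines, using the mapping from <code>\((\mu,\sigma,s)\)</code> to <code>\((H,L,p)\)</code> given earlier (function names are mine):</p>

```python
import math

def binary_from_moments(mu, sigma, s):
    # (H, L, p) from the formulas earlier in the post.
    r = math.sqrt(s * s + 4)
    return (mu + (s + r) / 2 * sigma,
            mu + (s - r) / 2 * sigma,
            2 / (4 + s * (s + r)))

def expected_utility(u, mu, sigma, s):
    H, L, p = binary_from_moments(mu, sigma, s)
    return p * u(H) + (1 - p) * u(L)

# Gambles A, B, and C as (mu, sigma, s) triples.
gambles = {"A": (10, 6, 0), "B": (10, 12, 5), "C": (10, 3, -3)}

eu_log = {i: expected_utility(math.log, *g) for i, g in gambles.items()}
eu_sqrt = {i: expected_utility(math.sqrt, *g) for i, g in gambles.items()}

# Log utility ranks B > A > C; square-root utility ranks C > A > B.
assert eu_log["B"] > eu_log["A"] > eu_log["C"]
assert eu_sqrt["C"] > eu_sqrt["A"] > eu_sqrt["B"]
```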
Estimating treatment effects with OLS
https://bldavies.com/blog/estimating-treatment-effects-ols/
Sat, 12 Nov 2022 00:00:00 +0000https://bldavies.com/blog/estimating-treatment-effects-ols/<p>A crop farmer wonders if he should use a new fertilizer.
He asks his peers which fertilizer they use and what their annual yields are.
He notices that some have different soil.
“That’s annoying,” the farmer thinks.
“If we all had the same soil, then I could estimate the benefit of using the new fertilizer by comparing the mean yields among farmers who do and don’t use it.
But now I have to control for soil too!”</p>
<p>Thankfully the farmer learned about <a href="https://en.wikipedia.org/wiki/Ordinary_least_squares">ordinary least squares</a> in his youth.
He remembers that he can control for variables by including them in a regression equation.
He posits a linear model
<code>$$\text{yield}=\beta_1\text{fert}+\beta_2\text{soil}+\epsilon,$$</code>
where</p>
<ul>
<li><code>\(\text{fert}\)</code> indicates using the new fertilizer,</li>
<li><code>\(\text{soil}\)</code> indicates having a different soil,</li>
<li><code>\(\beta_1\)</code> and <code>\(\beta_2\)</code> are the average marginal effects of changing fertilizers and soils, and</li>
<li><code>\(\epsilon\)</code> is an iid random error.</li>
</ul>
<p>The farmer estimates <code>\(\beta_1\)</code> and <code>\(\beta_2\)</code> using OLS, and gets the following results:</p>
<table>
<thead>
<tr>
<th align="center">Coefficient</th>
<th align="center">Estimate</th>
<th align="center">Std. error</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center"><code>\(\beta_1\)</code></td>
<td align="center">0.787</td>
<td align="center">0.210</td>
</tr>
<tr>
<td align="center"><code>\(\beta_2\)</code></td>
<td align="center">1.013</td>
<td align="center">0.211</td>
</tr>
</tbody>
</table>
<p>The farmer’s daughter enters his office.
She looks at his estimates and asks, “why don’t you just compare the mean yields among farmers with the same soil as you?
That seems less complicated than OLS.”
The farmer agrees.
He computes the conditional means
<code>$$\mu_{10}\equiv\mathrm{E}[\text{yield}\mid\text{fert}=1\ \text{and}\ \text{soil}=0]$$</code>
and
<code>$$\mu_{00}\equiv\mathrm{E}[\text{yield}\mid\text{fert}=0\ \text{and}\ \text{soil}=0]$$</code>
in his data, and finds that <code>\(\mu_{10}-\mu_{00}=0.965\)</code>.
This surprises the farmer:
“I thought OLS controlled for variation in soil.
I expected it to give me the same result as computing the difference in conditional means.
But it doesn’t.
Why not?”</p>
<p>The farmer has an idea:
“What if I include an interaction term?”
He posits an extended model
<code>$$\text{yield}=\gamma_1\text{fert}+\gamma_2\text{soil}+\gamma_3(\text{fert}\cdot\text{soil})+\epsilon,$$</code>
estimates it via OLS, and gets the following results:</p>
<table>
<thead>
<tr>
<th align="center">Coefficient</th>
<th align="center">Estimate</th>
<th align="center">Std. error</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center"><code>\(\gamma_1\)</code></td>
<td align="center">0.965</td>
<td align="center">0.290</td>
</tr>
<tr>
<td align="center"><code>\(\gamma_2\)</code></td>
<td align="center">1.208</td>
<td align="center">0.303</td>
</tr>
<tr>
<td align="center"><code>\(\gamma_3\)</code></td>
<td align="center">-0.377</td>
<td align="center">0.422</td>
</tr>
</tbody>
</table>
<p>“Interesting,” he thinks.
“OLS gives me the difference in conditional means if I include an interaction term, but not if I don’t.
I wonder what’s going on?”</p>
<p>What’s going on is that <code>\(\beta_1\)</code> and <code>\(\gamma_1\)</code> measure different things.
The latter measures the average effect of using the new fertilizer <em>without changing</em> soils.
Thus <code>\(\gamma_1=\mu_{10}-\mu_{00}\)</code> by definition.
Whereas <code>\(\beta_1\)</code> measures the average effect of using the new fertilizer <em>across all</em> soils.
Thus
<code>$$\beta_1=(1-p)\left(\mu_{10}-\mu_{00}\right)+p\left(\mu_{11}-\mu_{01}\right),$$</code>
where <code>\(p=\Pr(\text{soil}=1)\)</code> is the share of the farmer’s peers who have a different soil, and
<code>$$\mu_{fs}\equiv\mathrm{E}[\text{yield}\mid\text{fert}=f\ \text{and}\ \text{soil}=s]$$</code>
is the mean yield among peers with <code>\(\text{fert}=f\in\{0,1\}\)</code> and <code>\(\text{soil}=s\in\{0,1\}\)</code>.
The farmer’s data has <code>\(p=0.47\)</code> and <code>\(\mu_{11}-\mu_{01}=0.587\)</code>, giving
<code>$$\beta_1=(1-0.47)\times0.965+0.47\times0.587=0.787$$</code>
as in the first table above.</p>
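<p>The arithmetic is easy to check (variable names are mine, values from the tables above):</p>

```python
# Verify the weighted-average decomposition with the farmer's numbers.
p = 0.47              # share of peers with different soil
effect_soil0 = 0.965  # mu_10 - mu_00
effect_soil1 = 0.587  # mu_11 - mu_01

beta_1 = (1 - p) * effect_soil0 + p * effect_soil1
assert abs(beta_1 - 0.787) < 1e-3  # matches the first table's estimate
```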
<p>The OLS estimates of <code>\(\beta_1\)</code> and <code>\(\gamma_1\)</code> differ whenever the effect of using the new fertilizer varies across soils; that is, whenever <code>\(\gamma_3\not=0\)</code> in the true model.
But they can also differ when <code>\(\gamma_3=0\)</code> due to sampling variation.
For example, suppose the true model is
<code>$$\text{yield}=\text{fert}+\text{soil}+\epsilon,$$</code>
where <code>\(\text{fert}\)</code> and <code>\(\text{soil}\)</code> are independent, and where <code>\(\epsilon\)</code> is iid normally distributed.
The differences <code>\((\mu_{10}-\mu_{00})\)</code> and <code>\((\mu_{11}-\mu_{01})\)</code> in conditional means can differ in small samples because <code>\(\text{soil}\)</code> and <code>\(\epsilon\)</code> can be correlated by chance.
But this <a href="https://en.wikipedia.org/wiki/Spurious_relationship">spurious correlation</a> disappears as the sample grows, making <code>\(\beta_1\)</code> and <code>\(\gamma_1\)</code> converge.
I demonstrate this convergence in the table below.
It shows the mean absolute difference between <code>\(\beta_1\)</code> and <code>\(\gamma_1\)</code> across many samples of increasing size <code>\(n\)</code>:</p>
<table>
<thead>
<tr>
<th align="center"><code>\(n\)</code></th>
<th align="center"><code>\(\mathrm{E}\left[\lvert\beta_1-\gamma_1\rvert\right]\)</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">100</td>
<td align="center">0.160</td>
</tr>
<tr>
<td align="center">1,000</td>
<td align="center">0.050</td>
</tr>
<tr>
<td align="center">10,000</td>
<td align="center">0.014</td>
</tr>
</tbody>
</table>
<hr>
<p><em>Thanks to Anirudh Sankar for reading a draft version of this post.</em></p>
Why do experts give simple advice?
https://bldavies.com/blog/why-experts-give-simple-advice/
Sun, 25 Sep 2022 00:00:00 +0000https://bldavies.com/blog/why-experts-give-simple-advice/<p>One of the requirements for <a href="https://bldavies.com/blog/stanford/">my PhD program</a> is to write a “second-year paper.”
You can read mine <a href="https://arxiv.org/abs/2209.11710v1">here</a>.
It discusses how career concerns impact the type of advice that experts provide.
I consider two types of advice:</p>
<ol>
<li>“Simple” advice of the form “take this action;”</li>
<li>“Complex” advice of the form “take this action under these conditions.”</li>
</ol>
<p>Including conditions makes the expert seem more confident his advice is correct.
This hurts his reputation if his advice turns out to be <em>incorrect</em>.
Then the advisee <a href="https://bldavies.com/blog/learning-noisy-signals/">infers</a> that the expert is incompetent.
She says, “most wrong experts are incompetent.
You’re wrong, so you’re probably incompetent.
You’re fired!”</p>
<p>The expert can avoid this fate by “simplifying” his advice: by excluding relevant conditions.
This makes the advice worse but prevents the advisee from learning about the expert’s competence.
It insures him against the risk of losing his job.</p>
<p><a href="https://arxiv.org/abs/2209.11710v1">The paper</a> formalizes this argument.
It explores how the expert’s choice between simple and complex advice depends on his incentives.
It explains my answer to the titular question: experts give simple advice to avoid being “confidently wrong.”</p>
Dollar cost averaging
https://bldavies.com/blog/dollar-cost-averaging/
Sat, 17 Sep 2022 00:00:00 +0000https://bldavies.com/blog/dollar-cost-averaging/<p><a href="https://en.wikipedia.org/wiki/Dollar_cost_averaging">Dollar cost averaging</a> (DCA) is a way to split a lump sum investment into many smaller investments.
It involves regular purchases of a fixed <em>value</em> (rather than <em>quantity</em>) of shares.
This leads to buying more shares when their price is low and fewer when their price is high.
DCA is less risky than investing the lump sum because:</p>
<ol>
<li>it reduces the chance of buying lots of shares before their price rises or falls;</li>
<li>it reduces the time that invested cash spends earning capital gains and losses.</li>
</ol>
<p>But DCA is also less rewarding if prices trend upward because uninvested cash does not earn capital gains.
In that case, choosing between DCA and lump sum investment requires trading off risks and rewards.</p>
<p>For example, suppose I have some cash to invest in a market index: the <a href="https://en.wikipedia.org/wiki/S%26P_500">S&P 500</a>.
Here’s how that index evolved over the past five years (based on week-closing values from <a href="https://fred.stlouisfed.org/series/SP500">FRED</a>):</p>
<p><img src="figures/sp500-series-1.svg" alt=""></p>
<p>The index grew overall, with a sharp drop at the start of the pandemic and a slower drop at the start of this year.
The weekly return fluctuated around a mean of 0.2%:</p>
<p><img src="figures/sp500-returns-1.svg" alt=""></p>
<p>Let’s assume future weekly returns will follow this distribution.
Should I invest all my cash now (the “lump sum” strategy) or split it into equal weekly investments (the “weekly DCA” strategy)?
How about equal monthly investments (the “monthly DCA” strategy)?</p>
<p>We can answer these questions via simulation:<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<ol>
<li>Sample 52 values from the S&P 500’s weekly return distribution.</li>
<li>Take the cumulative product of those returns to get a simulated price path.</li>
<li>Divide the cash invested each week by the simulated price for that week to get the number of shares bought that week.</li>
<li>Multiply the total number of shares bought by the price in the 52nd week to get the investments’ final value.</li>
<li>Divide the final value by the amount of cash invested to get the annual return.</li>
</ol>
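<p>Here is a sketch of those steps for the lump-sum and weekly DCA strategies. For simplicity it assumes normally distributed weekly returns (using the mean of 0.2% noted above and a standard deviation of 2.8%) rather than sampling from the empirical distribution:</p>

```python
import random
import statistics

random.seed(0)
WEEKS, SIMS = 52, 2000
MEAN_R, SD_R = 0.002, 0.028  # assumed weekly return moments

def simulate_annual_returns():
    # Steps 1-2: sample weekly returns and accumulate a price path.
    prices, price = [], 1.0
    for _ in range(WEEKS):
        price *= 1 + random.gauss(MEAN_R, SD_R)
        prices.append(price)
    # Lump sum: one dollar buys one share at the initial price of 1.
    lump = prices[-1] - 1
    # Steps 3-5: weekly DCA buys 1/52 of a dollar at each week's price.
    shares = sum((1 / WEEKS) / p for p in prices)
    return lump, shares * prices[-1] - 1

lump_returns, dca_returns = zip(*(simulate_annual_returns() for _ in range(SIMS)))

# Lump sum earns more on average but with a wider spread, as in the table.
assert statistics.mean(lump_returns) > statistics.mean(dca_returns)
assert statistics.stdev(lump_returns) > statistics.stdev(dca_returns)
```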
<p>Repeating these five steps many times yields a distribution of annual returns offered by each strategy.
I compare those distributions in the table below, based on 1,000 simulated price paths.</p>
<table>
<thead>
<tr>
<th align="left">Strategy</th>
<th align="right">Mean</th>
<th align="right">Std. dev.</th>
<th align="right">Min.</th>
<th align="right">Max.</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Lump sum</td>
<td align="right">11.8%</td>
<td align="right">22.2%</td>
<td align="right">-46.1%</td>
<td align="right">95.0%</td>
</tr>
<tr>
<td align="left">Weekly DCA</td>
<td align="right">5.6%</td>
<td align="right">11.9%</td>
<td align="right">-26.1%</td>
<td align="right">56.2%</td>
</tr>
<tr>
<td align="left">Monthly DCA</td>
<td align="right">6.0%</td>
<td align="right">12.5%</td>
<td align="right">-27.0%</td>
<td align="right">57.3%</td>
</tr>
</tbody>
</table>
<p>The return on the lump sum strategy has the highest mean and variance.
Investing all my cash in the first week gives me more time “in the market” earning capital gains, but exposes me to lots of random gains and losses.
Investing in smaller chunks limits my exposure to gains and losses, narrowing the distribution of annual returns.</p>
<p>So, should I dollar cost average or not?
The answer depends on my risk tolerance.
If I don’t care about risk then I should choose the strategy with the highest mean return.
But if I’m risk averse then I need to be paid a <a href="https://en.wikipedia.org/wiki/Risk_premium">risk premium</a>.
The more risk averse I am and the riskier the strategy, the higher the risk premium.
I should choose the strategy with the highest return net of its risk premium.
This net, “certainty-equivalent” (CE) return equals the return on a riskless strategy that makes me indifferent between using it and using the risky strategy.</p>
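<p>As a rough sketch—assuming CRRA utility over final wealth and strictly positive gross returns—the CE return can be backed out of a set of simulated returns like so:</p>

```python
import math

def ce_return(returns, gamma):
    """Certainty-equivalent return under CRRA utility with coefficient gamma.

    Assumes all gross returns 1 + r are strictly positive.
    """
    wealths = [1 + r for r in returns]
    n = len(wealths)
    if gamma == 1:  # CRRA limit: log utility
        return math.exp(sum(math.log(w) for w in wealths) / n) - 1
    m = sum(w ** (1 - gamma) for w in wealths) / n  # E[w^(1 - gamma)]
    return m ** (1 / (1 - gamma)) - 1

# A risk-neutral investor (gamma = 0) values a gamble at its mean return;
# higher gamma shaves off a larger risk premium.
sample = [0.10, -0.10]
assert abs(ce_return(sample, 0)) < 1e-12
assert ce_return(sample, 5) < ce_return(sample, 1) < 0
```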
<p>For example, the chart below plots the CE return on each strategy when I have <a href="https://en.wikipedia.org/wiki/Risk_aversion#Relative_risk_aversion">constant relative risk aversion</a>.
When my risk aversion is low, I prefer investing the lump sum.
But when my risk aversion is high, I prefer investing in smaller chunks.
Weekly and monthly chunks appear to deliver similar CE returns in my simulations.</p>
<p><img src="figures/sp500-ce-returns-1.svg" alt=""></p>
<p>The risk aversion level that makes me prefer DCA depends on the asset I invest in.
For example, suppose I’d rather invest in <a href="https://en.wikipedia.org/wiki/Bitcoin">bitcoin</a>.
Its recent prices were much more volatile than the S&P 500 (according to week-closing values from <a href="https://finance.yahoo.com/quote/BTC-USD/history?p=BTC-USD">Yahoo Finance</a>):</p>
<p><img src="figures/bitcoin-series-1.svg" alt=""></p>
<p>Investing in bitcoin offered a mean weekly return of 1.3% in the past five years, six times that of the S&P 500.
But bitcoin’s returns were riskier: they had a standard deviation of 11.0%, whereas the S&P 500’s returns had a standard deviation of 2.8%.</p>
<p>The chart below compares the lump-sum, weekly DCA, and monthly DCA strategies for investing in bitcoin.
It shows the certainty-equivalent return on each strategy, based on 1,000 price paths simulated using the five steps described above.
My decision rule is the same as when investing in the S&P 500: use DCA if I’m sufficiently risk averse.
But the “sufficient” level of risk aversion for bitcoin is lower than for the S&P 500.
This is because bitcoin is riskier: its risk premium is a larger share of its mean return.</p>
<p><img src="figures/bitcoin-ce-returns-1.svg" alt=""></p>
<p>One benefit of DCA that my simulations don’t capture is its simplicity: I don’t have to think about <em>when</em> to invest the lump sum.
Indeed DCA removes the temptation to <a href="https://en.wikipedia.org/wiki/Market_timing">time the market</a> that leads many investors astray.</p>
<hr>
<p><em>Disclaimer: I am not a financial advisor and this post is not financial advice.
Do your own research on the investments that feel right to you.
Don’t invest money you can’t afford to lose.</em></p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>I assume my uninvested cash earns interest at the inflation rate.
This means I can treat the simulated prices as real.
I also assume there are no transaction costs or brokerage fees. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Homophily and the strength of moderate ties
https://bldavies.com/blog/homophily-strength-moderate-ties/
Fri, 16 Sep 2022 00:00:00 +0000https://bldavies.com/blog/homophily-strength-moderate-ties/<p>Yesterday <em>Science</em> published <a href="https://doi.org/10.1126/science.abl4476">a study</a> on social networks and job mobility.
It suggests that there’s a causal, “inverted U-shaped” relationship between</p>
<ol>
<li>the number of mutual friends you share with someone and</li>
<li>the probability that befriending them leads you to change jobs.</li>
</ol>
<p>The authors call for a theory to explain this relationship.
In fact it has a simple explanation: <a href="https://en.wikipedia.org/wiki/Homophily">homophily</a>.</p>
<p>People tend to have friends with similar interests.
Those interests influence the jobs we want and hear about.
If you and I have lots of mutual friends, then I probably hear about lots of job opportunities that interest you.
But you probably hear about those opportunities too because you <a href="https://bldavies.com/blog/echo-chambers-useful/">talk to the same people</a> and <a href="https://bldavies.com/blog/ideological-bias-trust-information-sources/">follow the same news sources</a>.
So befriending me is unlikely to impact your job mobility because the information I could give you is redundant.</p>
<p>The opposite is true if we have few mutual friends.
Then we probably hear about different job opportunities because we talk to different people and follow different news sources.
But few of the opportunities I hear about will interest you.
So befriending me is unlikely to impact your job mobility because the information I could give you is irrelevant.</p>
<p>Thus, homophily creates a trade-off between relevance and redundancy.
Befriending “strong ties” (i.e., people with lots of mutual connections) provides information that is relevant but redundant.
Befriending “weak ties” (i.e., people with few or no mutual connections) provides information that is irrelevant but novel.
Befriending “moderate” ties balances relevance and redundancy.
It lets you hear about opportunities you find interesting and wouldn’t hear otherwise.</p>
<p>If everyone acts on those opportunities, then we should see the relationship suggested by the <em>Science</em> study.</p>
Reflections on grad school: Years 1 and 2
https://bldavies.com/blog/reflections-grad-school-years-1-2/
Tue, 06 Sep 2022 00:00:00 +0000https://bldavies.com/blog/reflections-grad-school-years-1-2/<p>This post reflects on the first two years of <a href="https://bldavies.com/blog/stanford/">my economics PhD at Stanford University</a>.
I discuss my <a href="#first-year-courses">first</a>- and <a href="#second-year-courses">second</a>-year coursework, and my <a href="#quality-of-life">quality of life</a> as a grad student.</p>
<h2 id="first-year-courses">First-year courses</h2>
<p>I spent the first year taking the “core” micro, macro, and econometrics courses.
Most of their content was familiar from undergrad.
Some of my classmates waived out of courses they’d taken before.
I didn’t because I hadn’t.
I also wanted to be “in sync” with my classmates: working on the same problems, facing the same stresses, and celebrating the same milestones.</p>
<p>I found macro the most rewarding and metrics the least.
In macro I learned how to solve dynamic optimization problems.
I used that skill in a <a href="https://bldavies.com/blog/rationalizing-negative-splits/">blog post</a> and <a href="https://bldavies.com/research/D2.pdf">term paper</a> on non-macro topics.
In contrast, I’m about as good at econometrics as I was before starting my PhD.
I know more ways to compute standard errors, but I’ve still never run a diff-in-diff.</p>
<p>Most of our assessment was via problem sets.
They tended to focus on technical minutiae rather than fundamental insights.
I seldom found them educational.
Working in groups made them <em>less</em> educational.
In theory, group-work involved discussions that helped everyone learn.
In practice, it involved “dividing and conquering:” splitting problems among group members to work on alone.</p>
<p>We had no qualifying or in-person exams.
Some courses had final assignments, but we did them at home.
So I saw no reason to study.
Instead I waited until we got our assignments and learned only what I needed.
I didn’t want to waste time studying topics I didn’t care about.
And I never forgot what the department chair said on our first day:</p>
<blockquote>
<p>“Grades don’t matter.
What matters is whether you do good research.”</p>
</blockquote>
<p>One consequence of not studying was ending the year feeling like I knew <em>less</em> economics.
I gained more awareness than knowledge, so my ratio of “known knowns” to “known unknowns” fell.
But awareness is still useful: I know what keywords to search if I need to learn something in the future.</p>
<h2 id="second-year-courses">Second-year courses</h2>
<p>I kept taking courses in my second year.
But I got to choose my courses based on the fields I chose to specialize in.
I chose micro theory and behavioral economics.
Some of my reasons were:</p>
<ul>
<li>I like studying simple models of how people behave and interact;</li>
<li>I’d rather argue about <a href="https://bldavies.com/blog/judging-economic-models/">modeling assumptions</a> than external validity;</li>
<li>Theory and (especially) behavioral courses had the fewest assessments.</li>
</ul>
<p>I was scared about my career: the job market for theorists, especially behavioral theorists, is notoriously awful.
But I was <em>more</em> scared of doing research I didn’t enjoy.
That said, I viewed field choices as administrative only.
They didn’t have to confine my research.
Indeed I <a href="https://bldavies.com/blog/gender-sorting-economists/">published a paper</a> in my second year that was neither theoretical nor behavioral.</p>
<p>I also had to take “distribution” courses outside my chosen fields.
Mine were on market design, political economy, and economic history.
I attended some sociology classes because I wanted to meet non-economists who shared my interests.
I met some non-economists (including some who were anti-economists), but none shared my interests.
They also had very different definitions of “theory.”
But I enjoyed hearing their perspectives.</p>
<p>The purpose of the second year was to help us transition from being research consumers to producers.
Our assessments reflected that purpose.
They included referee reports, research proposals, and term papers.
Proposals were helpful for organizing and clarifying my ideas.
They weren’t helpful for prompting feedback: I submitted six proposals and got comments once.
Instead I got feedback from discussions with professors and classmates.
Those discussions made going to class worthwhile.</p>
<p>The best discussions were with people who challenged me to think harder.
For example, some professors were known to ask hard questions when students shared their ideas.
At first those professors seemed “scary.”
Eventually I realized that what made them scary was that they assumed I was intelligent.
They wouldn’t let me make hand-wavy arguments or think lazily.
I learned to admire those professors and gravitated to them.
Sometimes they told me my ideas were shallow or wrong.
But I’d rather be wrong in class than in print.</p>
<h2 id="quality-of-life">Quality of life</h2>
<p>People warned me that grad students have no free time.
That has not been my experience.
I’ve had plenty of time to exercise, blog, and be unproductive.
I had that time because I chose to minimize my coursework.
I made that choice because (i) grades don’t matter (see above), and (ii) I saw coursework as a barrier to doing research and enjoying my life.</p>
<p>People also warned me that grad students live in poverty.
Again, that has not been my experience.
Stanford pays enough that I can dine out occasionally (even at Palo Alto prices), and can eat more than beans and rice at home.
I can replace my running shoes and socks when they wear out.
I don’t have to worry about <a href="https://bldavies.com/blog/living-america/#healthcare">hospital bills</a>.
I feel <em>privileged</em> rather than poor.
Campus housing is expensive, but Stanford deducts rent from my stipend so I don’t notice they’re ripping me off.</p>
<p>In hindsight, I under-appreciated local amenities when I <a href="https://bldavies.com/blog/applying-economics-phd-programs/">applied to PhD programs</a>.
My other options were in Boston, Chicago, and New York.
Stanford definitely wins on the weather front: it’s always warm and dry here.
We don’t have Chicago’s bitter winters or the east coast’s humid summers.
I can go outside to unwind whenever I like.
If I couldn’t then I’d go insane.</p>
<p>But Stanford loses on the “fun place to live” front.
<a href="https://bldavies.com/blog/living-america/#its-always-sunny-in-palo-alto">Palo Alto is small and suburban</a>.
It lacks the energy and excitement found in big cities.
San Francisco is an hour away by train, which is fine for days out but a hassle for nights out.
I prefer running to drinking, so I’m willing to sacrifice bars for sun.
But that preference is endogenous.</p>
Paying for the truth
https://bldavies.com/blog/paying-truth/
Thu, 01 Sep 2022 00:00:00 +0000https://bldavies.com/blog/paying-truth/<p>In a <a href="https://bldavies.com/blog/truth-seekers-ideologues/">previous post</a>, I showed that if the truth doesn’t matter then I’m better off being an ideologue with ideological friends.
I discussed the trade-off between (i) experiencing reality and (ii) experiencing what my friends experience.
Truth-seeking made sense only when the benefit of (i) exceeded the cost of forgoing (ii).
This post discusses another cost of truth-seeking: having to pay—financially, cognitively, or emotionally—for information.</p>
<p>One way to model that cost is as follows.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
Suppose the truth is determined by a random variable <code>\(\theta\in\{0,1\}\)</code>.
I learn about <code>\(\theta\)</code> by observing a signal <code>\(s(x)\in\{0,1\}\)</code> with precision
<code>$$\Pr(s(x)=\theta)=\frac{1+x}{2}.$$</code>
The parameter <code>\(x\in[0,1]\)</code> determines the signal’s quality.
If <code>\(x=1\)</code> then the signal is fully informative; if <code>\(x=0\)</code> then it is uninformative.</p>
<p>My prior estimate <code>\(\theta_0\in[0.5,1]\)</code> of <code>\(\theta\)</code> is based on no information; it reflects my ideology.
I use the realization of <code>\(s(x)\)</code> and my prior <code>\(\theta_0\)</code> to form a posterior estimate
<code>$$\hat\theta(s(x))=\Pr\left(\theta=1\,\vert\,s(x)\right)$$</code>
via <a href="https://bldavies.com/blog/learning-noisy-signals/">Bayes’ rule</a>.
I care about the <a href="https://en.wikipedia.org/wiki/Mean_squared_error">mean squared error</a>
<code>$$\newcommand{\E}{\mathrm{E}} \newcommand{\MSE}{\mathrm{MSE}} \MSE(x)=\E\left[\left(\theta-\hat\theta(s(x))\right)^2\right]$$</code>
of my posterior estimate, where <code>\(\E\)</code> is the expectation operator taken with respect to the joint distribution of <code>\(\theta\)</code> and <code>\(s(x)\)</code> given my prior <code>\(\theta_0\)</code>.
But I also care about the cost <code>\(cx\)</code> I endure from observing a signal of quality <code>\(x\)</code>.
This cost reflects the resources I use to seek the information and process it (e.g., money, time, and mental energy).
I choose the quality <code>\(x^*\)</code> that minimizes
<code>$$f(x)=\MSE(x)+cx.$$</code>
The chart below plots my objective <code>\(f(x)\)</code> against <code>\(x\)</code> when I have prior <code>\(\theta_0\in\{0.5,0.7,0.9\}\)</code> and face marginal cost <code>\(c\in\{0,0.1,0.2,0.3\}\)</code>.
Since <code>\(f\)</code> is concave in <code>\(x\)</code>, it has (constrained) local minima at <code>\(x=0\)</code> and <code>\(x=1\)</code>.
My choice between these minima depends on the value of <code>\(c\)</code>.
If it’s small then information is cheap and I “buy” as much as I can.
If it’s large then information is expensive and I don’t buy any.
But there’s no middle ground: I seek <em>all</em> the truth or none of it.</p>
<p><img src="figures/objectives-1.svg" alt=""></p>
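<p>The all-or-nothing result is easy to check numerically. The sketch below (my own code, not the post's; function and variable names are mine) enumerates the four <code>\((\theta,s)\)</code> outcomes to compute the MSE exactly, then minimizes <code>\(f(x)=\MSE(x)+cx\)</code> on a grid — the minimizer always lands at a corner:</p>

```python
def mse(x, theta0):
    """Expected squared error of the Bayesian posterior estimate,
    computed by enumerating the four (theta, s) outcomes exactly."""
    total = 0.0
    for theta in (0, 1):
        p_theta = theta0 if theta == 1 else 1 - theta0
        for s in (0, 1):
            p_s = (1 + x) / 2 if s == theta else (1 - x) / 2  # Pr(s | theta)
            # Bayes' rule: posterior Pr(theta = 1 | s)
            like1 = (1 + x) / 2 if s == 1 else (1 - x) / 2
            like0 = (1 - x) / 2 if s == 1 else (1 + x) / 2
            post = theta0 * like1 / (theta0 * like1 + (1 - theta0) * like0)
            total += p_theta * p_s * (theta - post) ** 2
    return total

def best_quality(theta0, c):
    """Signal quality x minimizing f(x) = MSE(x) + c*x on a grid."""
    grid = [i / 200 for i in range(201)]
    return min(grid, key=lambda x: mse(x, theta0) + c * x)

# f is concave in x, so the constrained minimizer is always a corner:
for theta0 in (0.5, 0.7, 0.9):
    for c in (0.0, 0.1, 0.2, 0.3):
        assert best_quality(theta0, c) in (0.0, 1.0)
```

<p>With an uninformative signal the posterior equals the prior, so <code>\(\MSE(0)=\theta_0(1-\theta_0)\)</code>; with a fully informative signal the posterior equals <code>\(\theta\)</code>, so <code>\(\MSE(1)=0\)</code>.</p>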
<p>Let <code>\(c^*\)</code> be the threshold value of <code>\(c\)</code> at which I stop paying for information: the “choke price” of truth.
How does <code>\(c^*\)</code> depend on my prior <code>\(\theta_0\)</code>?
Intuitively, increasing <code>\(\theta_0\)</code> has two competing effects:</p>
<ol>
<li>it increases the error in my posterior estimate when <code>\(\theta=0\)</code>;</li>
<li>it increases my confidence that <code>\(\theta=1\)</code>.</li>
</ol>
<p>The first effect makes me <em>want more</em> information, increasing <code>\(c^*\)</code>.
The second effect makes me think I <em>need less</em> information, decreasing <code>\(c^*\)</code>.
The chart below shows that the second effect dominates.
The more ideological I am about the value of <code>\(\theta\)</code>, the cheaper the truth must be for me to seek it.
If I’m a pure ideologue (i.e., <code>\(\theta_0=1\)</code>) then I won’t seek the truth even if it’s free.</p>
<p><img src="figures/choke-prices-1.svg" alt=""></p>
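<p>Given the corner solutions, the choke price has a simple closed form: I pay for full information exactly when <code>\(\MSE(1)+c\le\MSE(0)\)</code>, so <code>\(c^*=\MSE(0)-\MSE(1)=\theta_0(1-\theta_0)\)</code>, the variance of my prior. A small check (my own derivation from the setup above, so treat it as a sketch rather than the post's code):</p>

```python
def choke_price(theta0):
    """c* = MSE(0) - MSE(1): the cost at which buying full information
    stops beating buying none. MSE(1) = 0 and MSE(0) is the prior
    variance theta0 * (1 - theta0)."""
    return theta0 * (1 - theta0)

priors = [k / 20 for k in range(10, 21)]  # theta0 from 0.5 to 1
prices = [choke_price(t) for t in priors]

# More ideological prior => the truth must be cheaper for me to seek it:
assert all(a > b for a, b in zip(prices, prices[1:]))
# A pure ideologue (theta0 = 1) won't seek the truth even if it's free:
assert prices[-1] == 0.0
```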
<p>The first effect can dominate, however, if I care more about errors when <code>\(\theta=0\)</code> than about errors when <code>\(\theta=1\)</code>.
For example, if <code>\(\theta\)</code> indicates whether it will be sunny then I’d rather bring an umbrella I don’t use than be caught wearing flip-flops in the rain.
I can capture that asymmetry by replacing the MSE component of my objective with a weighted version
<code>$$\newcommand{\WMSE}{\mathrm{WMSE}} \WMSE(x)=\E\left[W(\theta)\cdot\left(\theta-\hat\theta(s(x))\right)^2\right],$$</code>
where the weighting function
<code>$$W(\theta)=\begin{cases} 1 & \text{if}\ \theta=1 \\ w & \text{if}\ \theta=0 \end{cases}$$</code>
has <code>\(w\ge1\)</code>.
Increasing <code>\(w\)</code> nudges my optimal posterior estimate towards zero because I want to avoid being “confidently wrong” when <code>\(\theta=0\)</code>.
Since <code>\(\WMSE(x)\)</code> is concave in <code>\(x\)</code>, I still optimally pay for all the truth or none of it.
But now the choke price <code>\(c^*\)</code> at which I stop paying for the truth depends on my prior <code>\(\theta_0\)</code> <em>and</em> the error weight <code>\(w\)</code>.</p>
<p>The chart below shows that <code>\(c^*\)</code> is non-monotonic in <code>\(\theta_0\)</code> when <code>\(w\)</code> is large.
This is due to the two competing effects described above.
The first effect dominates when <code>\(w\)</code> is large and my prior is low.
In that case, it’s really bad to be wrong and I’m not confident I’ll be right.
Whereas the second effect dominates when <code>\(w\)</code> is large and my prior is high.
In that case, I’m so confident I’ll be right that I don’t care what happens if I’m wrong.</p>
<p><img src="figures/choke-prices-weighted-1.svg" alt=""></p>
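<p>The weighted case can be checked the same way. Given a posterior <code>\(p\)</code>, the estimate minimizing the expected weighted loss is <code>\(t^*=p/(p+w(1-p))\)</code> (my algebra, not stated explicitly above), and a fully informative signal makes the loss zero, so — assuming the corner-solution logic carries over as the post says — the choke price is just the weighted loss of relying on the prior:</p>

```python
def weighted_choke_price(theta0, w):
    """c* under the weighted loss. WMSE(1) = 0, so c* = WMSE(0): the
    weighted loss when the posterior is just the prior theta0 and the
    estimate solves the weighted minimization."""
    t = theta0 / (theta0 + w * (1 - theta0))  # argmin of weighted loss
    return theta0 * (1 - t) ** 2 + w * (1 - theta0) * t ** 2

# With a large w, c* is non-monotonic in theta0: it rises while my prior
# is low (being wrong is costly and I'm unsure), then falls as my
# confidence takes over.
prices = [weighted_choke_price(t, 10) for t in (0.5, 0.7, 0.9, 0.99)]
assert prices[1] > prices[0]              # first effect dominates
assert prices[1] > prices[2] > prices[3]  # second effect dominates
```

<p>Setting <code>\(w=1\)</code> recovers <code>\(t^*=\theta_0\)</code> and the unweighted choke price <code>\(\theta_0(1-\theta_0)\)</code>.</p>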
<p>This example raises a philosophical question: what does it mean for an estimate to be “wrong”?
For example, suppose I thought there was a 30% chance of rain.
If it rained, was I wrong?
What if I thought there was a 5% chance?
A 95% chance?
Where should I draw the line?
On those questions, I recommend Michael Lewis’ discussion with Nate Silver about 17 minutes into <a href="https://www.pushkin.fm/podcasts/against-the-rules/respect-the-polygon">this podcast episode</a>.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>See <a href="https://bldavies.com/blog/paying-precision">here</a> for my discussion of the case when <code>\(\theta\)</code> and <code>\(s\)</code> are normally distributed. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Why should academics blog?
https://bldavies.com/blog/academics-blog/
Sun, 31 Jul 2022 00:00:00 +0000https://bldavies.com/blog/academics-blog/<p>I published <a href="https://bldavies.com/blog/habitat-choices-first-generation-pokemon/">my first blog post</a> in March 2018.
Since then I’ve spent countless hours planning, drafting, and editing other posts.
Academics might think those hours were wasted:
“Why write blog posts when you could write research papers?
Blogging won’t get you citations or tenure!”
But I disagree with that criticism.
Blogging <em>complements</em> my research rather than substitutes for it.
Here are seven reasons:</p>
<ol>
<li>
<p><strong>Blogging can lead to papers</strong>.
<a href="https://bldavies.com/blog/policymaking-under-uncertainty/">My post on policymaking under uncertainty</a> inspired <a href="https://bldavies.com/blog/covid-19-lockdown-two-sided-uncertainty/">Arthur Grimes and my paper on COVID-19 lockdowns</a>.
Blogging about <a href="https://github.com/bldavies/nberwp/">nberwp</a> meant I understood the data and context enough to write <a href="https://bldavies.com/blog/gender-sorting-economists/">my paper on gender sorting among economists</a>.
Discussing the idea for a post with Adam Jaffe led to <a href="https://bldavies.com/blog/research-funding-collaboration/">our paper on research funding and collaboration</a>.</p>
</li>
<li>
<p><strong>Blogging increases my idea turnover</strong>.
I have lots of research ideas.
Some are worth pursuing and some are <a href="https://bldavies.com/blog/dead-ends/">dead ends</a>.
I sort ideas by “testing” them: writing down toy models or exploring relevant data.
Blogging lets me run those tests quickly and casually.
It also lets me share my tests with readers.
They can avoid dead ends I’ve reached, or salvage ideas I’ve abandoned if they see opportunities I don’t.</p>
</li>
<li>
<p><strong>Blogging promotes a creator mindset</strong>.
When I encounter a new idea, one of my first thoughts is “how could I make that a blog post?”
Blogging nudges me to think like a creator; to view ideas as opportunities to write something <a href="https://youtu.be/vtIzMaLkCaM?t=823">valuable</a>.
It also nudges me to focus on <em>output</em> as the source of value.
No matter how long I spend writing posts, no one can read and benefit from them if they’re still on my computer.
The goal is to publish.
Academics have a similar goal.</p>
</li>
<li>
<p><strong>Blogging improves my writing</strong>.
It gives me practice refining my ideas, (re)structuring arguments, and thinking about my audience.
Writing papers gives me similar practice, but blogging yields the benefits faster because I can write blog posts faster.</p>
</li>
<li>
<p><strong>Blogging helps me learn</strong>.
Most of my posts come from wanting to understand something.
Sometimes it’s a problem encountered in my research (e.g., <a href="https://bldavies.com/blog/dyadic-dependence">dyadic dependence</a> or <a href="https://bldavies.com/blog/understanding-selection-bias/">selection bias</a>).
Sometimes it’s a result from others’ research (e.g., on <a href="https://bldavies.com/blog/information-gerrymandering/">information gerrymandering</a> or <a href="https://bldavies.com/blog/improving-human-predictions/">modeling human predictions</a>).
Sometimes it’s a technical paper (e.g., on <a href="https://bldavies.com/blog/communicating-science/">communicating science</a> or <a href="https://bldavies.com/blog/research-incentives-evolution-knowledge/">research incentives</a>).
Writing blog posts makes me engage with ideas and explain them in my own words.</p>
</li>
<li>
<p><strong>Blogging helps me connect ideas</strong>.
Many of my posts build on previous posts.
Sometimes this is clear in advance (as with, e.g., my posts on <a href="https://bldavies.com/blog/stable-matchings/">stable matchings</a> with <a href="https://bldavies.com/blog/stable-matchings-noisy-preferences/">noisy</a> and <a href="https://bldavies.com/blog/stable-matchings-correlated-preferences/">correlated</a> preferences).
Sometimes I realize the connection between posts while writing them.
I love discovering how ideas are connected—indeed I’ve blogged about that <a href="https://bldavies.com/blog/college-degrees-similarity-measures/">here</a> and <a href="https://bldavies.com/blog/estimating-research-field-similarities/">here</a>—and view it as an essential research skill.
Blogging helps me practice that skill.</p>
</li>
<li>
<p><strong>Blogging is fun</strong>.
(Yes, academics can have fun!)
I enjoy thinking and writing.
Blogging is a way to think and write.
Most important, I can think and write about whatever I like—I don’t have to focus on topics that academics care about.
I can blog about <a href="https://bldavies.com/blog/birds-voting-russian-interference/">birds</a>, <a href="https://bldavies.com/blog/white-elephant-gift-exchanges/">gift exchanges</a>, and <a href="https://bldavies.com/blog/rationalizing-negative-splits/">running negative splits</a>.
I can even blog about <a href="https://bldavies.com/blog/habitat-choices-first-generation-pokemon">Pokémon</a>!
And I get the benefits of thinking and writing without the pressure of academic evaluation.</p>
</li>
</ol>
What's it like living in America?
https://bldavies.com/blog/living-america/
Tue, 26 Jul 2022 00:00:00 +0000https://bldavies.com/blog/living-america/<p>Last month I visited New Zealand for the first time since <a href="https://bldavies.com/blog/stanford/">moving to the USA</a>.
Lots of people asked what living in the USA is like.
Here’s what I told them:</p>
<h2 id="its-always-sunny-in-palo-alto">It’s always sunny in Palo Alto</h2>
<p>I live in <a href="https://en.wikipedia.org/wiki/Palo_Alto%2C_California">Palo Alto, California</a>.
It’s near San Francisco and part of Silicon Valley.
Palo Alto is officially a “city,” but it feels suburban: the streets are clean, there are trees everywhere, and most buildings are one or two stories.</p>
<p>Palo Alto has two main attractions.
One is the weather: the air is warm and dry, it seldom rains, and it never snows.
I don’t feel guilty about spending a nice day inside because <em>every</em> day is a nice day.
(In fact I have the opposite problem: I spend too much time outside running and cycling, and too little inside being productive.)</p>
<p>The other attraction is <a href="https://www.stanford.edu">Stanford University</a>, where I study.
Most of my interactions are with students and professors.
Yet Palo Alto doesn’t feel like a university town: it’s easier to find ice cream than beer, and everything is expensive.
The rent on my studio apartment here is more than what I paid for a two-bedroom apartment in the middle of <a href="https://en.wikipedia.org/wiki/Wellington">Wellington</a>.</p>
<p>Palo Alto is culturally diverse.
I hear foreign languages daily.
Most of my friends here are from South America or Europe.
Despite being used to hearing different accents, people have trouble with mine: they often think I’m named “Bin.”
(We all feel embarrassed when I have to spell my name, one of the simplest in the English language.)</p>
<p>In contrast, Palo Altoans seem politically homogeneous.
The Americans I know all vote Democrat; the non-Americans would if they could.
But people signal their politics in different ways.
Some decorate their lawns with “climate change is real” and “black lives matter” signs.
Others wear masks while walking alone outside or cycling without a helmet.</p>
<p>People here seem aware of, and concerned about, social issues plaguing the USA.
But they’re also insulated from such issues.
There are no riots.
There are few homeless and no (visible) guns.
No one looks obese.</p>
<p>Clearly Palo Alto is not representative of the USA.
Other areas have skyscrapers, snow, cheap drinks, Republicans, climate change deniers, and openly carried guns.
I’ve visited some cities on the east and west coasts, but nowhere in the south and almost nowhere rural.
So my perspective on living here is biased because my experience is biased.</p>
<p>But I don’t think <em>anywhere</em> is representative of the USA.
There’s so much variety in where people live, how they behave, and what they believe.
I didn’t appreciate that variety until moving here.
I thought of the USA similarly to New Zealand: I thought everyone was basically the same, with minor differences in wealth and lifestyle.
I thought wrong.</p>
<h2 id="hows-it-different">How’s it different?</h2>
<p>New Zealand and the USA differ in many ways.
Here are some of my observations:</p>
<h3 id="dining-out">Dining out</h3>
<p>When I read a restaurant menu in New Zealand, the price I see is the price I pay.
When I read one here, the price I see is about 80% of what I pay.
The last 20% comprises taxes and tips.
Taxes vary by product, store, and state.
Tips vary by (perceived) service quality and social norms.</p>
<p>The norm in Palo Alto is to tip 18% of the pre-tax price.
I’m not sure why the fee for shipping items from kitchen to table depends on the price of the cargo.
<a href="https://calave.com">One local menu</a> reads:</p>
<blockquote>
<p>We are a no tipping establishment.
20% service charge will be added to your bill to ensure a better living wage to our staff.</p>
</blockquote>
<p>They could just raise their pre-tax prices by 20%, but then they wouldn’t get to virtue signal.
At least they prompt people to multiply by 1.2 before choosing what to order.</p>
<h3 id="paying-taxes">Paying taxes</h3>
<p>I pay income tax to the state and federal governments.
I use third-party software to avoid the risk of committing fraud by mistake.
That risk exists because both governments already know my taxable income.
They could, like New Zealand’s tax office, just fill out my return and have me spend two minutes confirming it.
But then I wouldn’t be intimidated into paying an intermediary to organize my financial data.
I’m fortunate in that Stanford pays on my behalf.
Others in the USA are less fortunate.</p>
<h3 id="healthcare">Healthcare</h3>
<p>Stanford also pays for my health insurance.
I’d hate to be uninsured:
I broke my wrist last year, and my hospital and surgery fees totalled just under 100,000 USD (currently about 160,000 NZD).
But I had surgery just three days after my accident; in New Zealand I’d have paid almost nothing but waited weeks.
Sometimes you get what you pay for.</p>
<p>(As an aside:
My surgeon prescribed oxycodone, a pain-relieving opiate.
I paid 1.25 USD for 20 days’ worth.
That payment helped me understand why the USA has an <a href="https://en.wikipedia.org/wiki/Opioid_epidemic_in_the_United_States">opioid epidemic</a>.)</p>
<h3 id="talking-to-strangers">Talking to strangers</h3>
<p>In New Zealand it is (mostly) socially acceptable to talk to strangers.
People trust each other.
If you’re approached by someone you don’t know, they probably don’t want anything from you (other than, say, directions).
They usually just want to chat.</p>
<p>In the USA, it seems (mostly) socially <em>unacceptable</em> to talk to strangers.
People <em>don’t</em> trust each other.
If you’re approached by someone you don’t know, they probably want something from you.
They might want to chat, but only to build rapport before advancing their agenda.</p>
<h3 id="talking-generally">Talking generally</h3>
<p>Americans (and other non-New Zealanders) use “how are you” as a greeting rather than a question.
They don’t actually want to know; they just want you to say “good” or “fine,” and move on.
Replying “bad” would make the greeter’s day worse.
I hear “how are you” most often when walking past people I know.
They usually don’t stop and wait for an answer.
I’m still learning not to take offense.</p>
<p>Likewise I’m still adjusting to how Americans receive thanks.
New Zealanders always reply “you’re welcome.”
Americans always reply “sure” or “of course.”
I find those responses dismissive and rude.
They suggest my thanks were unnecessary and I’ve wasted my time offering them.
Whereas I think Americans want to avoid a sense of reliance: they don’t want me to think I owe them anything in return.</p>
<h3 id="scenery">Scenery</h3>
<p>Most people I’ve met know New Zealand is scenic and beautiful.
Scenery in the USA—at least, the scenery I’ve seen—is not as beautiful.
But the USA has something New Zealand doesn’t: <em>scale</em>.
<a href="https://en.wikipedia.org/wiki/Sequoioideae">Redwoods</a> are huge.
Big Sur is huge.
New York City’s skyline is huge.
New Zealand’s main scenic attractions—glaciers, lakes, and national parks—are not as huge.</p>
Research incentives and the evolution of knowledge
https://bldavies.com/blog/research-incentives-evolution-knowledge/
Fri, 22 Jul 2022 00:00:00 +0000https://bldavies.com/blog/research-incentives-evolution-knowledge/<p>Research is a cumulative process.
New discoveries build on previous discoveries: researchers “stand on the shoulders of giants.”
<a href="https://arxiv.org/abs/2102.13434">Carnehl and Schneider (2022)</a> embed this idea in a model of how knowledge evolves.
In their model, knowledge is the set of questions with known answers and research is the process of finding answers.
The model has three main features:</p>
<ol>
<li>Existing knowledge determines the benefits and costs of research.</li>
<li>Answering a question sheds light on related questions.</li>
<li>Researchers are free to choose which questions to ask and how intensely to seek answers.</li>
</ol>
<p>The authors first discuss the social benefit of research.
They think of society as an agent who makes policy choices.
These choices appear as questions:
How much should we tax companies?
How much should we subsidize healthcare?
Society knows the answers to some questions but is uncertain about the answers to others.
This uncertainty means society has to guess which policies are best.
Research is beneficial insofar as it leads to better guesses.
It does so through two channels:</p>
<ol>
<li>It reveals the answer to the researched questions.</li>
<li>It lowers the uncertainty around answers to other questions.</li>
</ol>
<p>Society is more certain about answers to questions that are “closer” to existing knowledge.
Intuitively, knowing how much to tax companies tells you more about taxing households than about building rockets.
Research removes more uncertainty for questions closer to those researched.
Carnehl and Schneider measure the benefit of research as the total amount of uncertainty it removes.</p>
<p>Next, the authors compare the benefits of research that “deepens” and “expands” knowledge.
They model questions as points on the real line and the “frontier” as the extremal points of existing knowledge.
Research on questions between these extremal points deepens knowledge; research on questions beyond the frontier expands knowledge.
The relative benefits of deepening and expanding depend on the gaps in existing knowledge.
Deepening is more beneficial when gaps are large.
This is because larger gaps leave more uncertainty to remove.
Splitting a large gap into smaller gaps removes more uncertainty than creating a new gap at the frontier.</p>
<p>Carnehl and Schneider then consider researchers’ choices:
What questions do they ask?
How intensely do they seek answers?
These choices depend on the private benefits and costs of research.
The authors assume private benefits equal social benefits.
They also assume private costs rise with search intensity and existing uncertainty.
More intense searches are more likely to succeed.
But, for a given likelihood, more uncertain answers need wider searches.
Carnehl and Schneider characterize researchers’ optimal choices in two dimensions:</p>
<ol>
<li>“Novelty:” how far is the chosen question from existing knowledge?</li>
<li>“Output:” how likely is the research to succeed?</li>
</ol>
<p>The relationship between novelty and output depends on whether the research expands or deepens knowledge.
If it expands knowledge, then novelty and output are substitutes: more novel research is always riskier.
If it deepens knowledge, then whether novelty and output are substitutes depends on the size of the gap being filled.
This dependence is intricate—see <a href="https://arxiv.org/abs/2102.13434">the paper</a> for details.</p>
<p>Finally, the authors use their model to study how researchers’ choices affect how knowledge evolves.
Carnehl and Schneider’s key insight is that short- and long-run choices differ.
Short-lived researchers choose questions that maximize private benefits less private costs.
But they don’t consider the impact their choices have on future researchers’ choices.
This impact arises from lowering the uncertainty for some questions but not others.
Long-lived researchers internalize the impact their choices today have on choices tomorrow.
The authors show that rewarding “moonshots”—research on questions more novel than myopically optimal—can raise the present value of future knowledge.</p>
<p>Overall, the paper is impressive.
Its introduction gives a clear summary of the main results.
The model is creative and crisp.
Like all <a href="https://bldavies.com/blog/judging-economic-models/">good models</a>, it focuses on one issue—the cumulative nature of research—and abstracts from others—e.g., the <a href="https://en.wikipedia.org/wiki/Scientific_priority">priority system</a> and career concerns.
The paper is also a rare theoretical contribution to the (mostly empirical) literature on the economics of science.</p>
<p>Carnehl and Schneider’s model could be extended to acknowledge the <a href="https://en.wikipedia.org/wiki/Replication_crisis">replication crisis</a>.
Their model assumes all research findings are certain and true.
But the crisis exists because some findings are <em>false</em>.
We discover false findings via replication studies.
These studies have <em>zero</em> novelty, but can still be beneficial: they remove uncertainty around findings we think are true.</p>
<p>Allowing for uncertain findings would then help us think about replication incentives.
Some economists argue they need to be stronger—see, e.g., <a href="https://doi.org/10.20955/wp.2015.016">Zimmerman (2015)</a>.
But whether to incentivize replication studies depends on the benefits they offer relative to original research.
If society is confident a finding is true, then replicating it may be less beneficial than producing novel findings.</p>
Truth-seekers and ideologues
https://bldavies.com/blog/truth-seekers-ideologues/
Mon, 18 Jul 2022 00:00:00 +0000https://bldavies.com/blog/truth-seekers-ideologues/<p>People learn socially: they get information from their friends.
Research on social learning takes as given that people want to learn the truth.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
This assumption motivates worries about online misinformation: if your friends see something wrong and share it with you, then you might believe it and be wrong too.</p>
<p>But people share for more reasons than learning.
Sometimes we share to feel connected: to let each other know we’re not alone in what we see.
We enjoy having <a href="https://bldavies.com/blog/polarized-beliefs-social-networks/">like-minded friends</a> who have relatable experiences and validate ours.
But if we <em>only</em> talk to like-minded friends then it’s hard to learn the truth because no one challenges our subjective experiences of objective reality.</p>
<p>Thus, when forming social networks, we face a trade-off.
We want friends with similar experiences because they help us feel connected.
But we also want friends with <em>different</em> experiences because they help us learn the truth.
How we resolve this trade-off depends on how much we care about the truth.
If we care a lot then we should choose friends with unbiased experiences;
if we don’t care at all then we should choose friends who share our biases.</p>
<p>Here’s a basic model to illustrate.
Imagine reality is chosen by a coin toss: Heads or Tails, each with probability 0.5.
There are two types of people:</p>
<ol>
<li>“Truth-seekers” try to see the world for what it is.
But they do so noisily: their experience matches reality with probability <code>\(a>0.5\)</code>.</li>
<li>“Ideologues” always see the world the same way: they always experience Heads.</li>
</ol>
<p>These types represent two extremes:
truth-seekers have unbiased but noisy experiences, whereas ideologues have biased but precise experiences.
I choose a friend to help me win one of two games:</p>
<ol>
<li>In the “learning” game, I win if my friend’s experience matches reality.</li>
<li>In the “connecting” game, I win if my friend shares my experience.</li>
</ol>
<p>I want to maximize my chance of winning the game we play.
But I don’t know <em>which</em> we’ll play until I’ve chosen my friend.
Which type should I choose?</p>
<p>If I’m a truth-seeker then I’m better off choosing a truth-seeking friend.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
They’re better in the learning game because they’re more likely than ideologues to experience reality.
They’re <em>also</em> better in the connecting game because we <em>both</em> tend to experience reality.
Our shared pursuit of truth makes our experiences correlated.
In contrast, ideologues’ indifference to the truth makes their experience <em>uncorrelated</em> with mine.</p>
<p>Things are different if I’m an ideologue.
Then my best choice depends on how likely I am to play each game.
Let <code>\(p\)</code> be the probability I play the learning game.
I’m better off choosing a truth-seeking friend if and only if <code>\(p\)</code> exceeds<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>
<code>$$\overline{p}\equiv \frac{1}{2a}.$$</code>
Intuitively, I face a trade-off:
Truth-seekers are better in the learning game for the same reason as above.
But now <em>ideologues</em> are better in the connecting game because they always share my ideological experience.
This trade-off tilts in favor of truth-seekers as their accuracy <code>\(a\)</code> rises, lowering the threshold probability <code>\(\overline{p}\)</code>.</p>
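<p>The threshold is easy to verify directly from footnote 3’s payoffs; a minimal sketch (the expressions come from the footnotes, the function names are mine):</p>

```python
def ideologue_payoff(p, a, friend):
    """Ex ante win probability for an ideologue, following footnote 3."""
    if friend == "truth-seeker":
        return p * a + 0.5 * (1 - p)  # learning: a; connecting: 1/2
    return 0.5 * p + (1 - p)          # learning: 1/2; connecting: 1

a = 0.8
p_bar = 1 / (2 * a)  # threshold above which a truth-seeking friend wins
eps = 1e-6
assert (ideologue_payoff(p_bar + eps, a, "truth-seeker")
        > ideologue_payoff(p_bar + eps, a, "ideologue"))
assert (ideologue_payoff(p_bar - eps, a, "truth-seeker")
        < ideologue_payoff(p_bar - eps, a, "ideologue"))

# Footnote 4's threshold for choosing one's own type sits below p_bar:
p_under = 4 * a * (1 - a) / (2 * a - 1 + 4 * a * (1 - a))
assert 0 < p_under < p_bar
```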
<p>Now suppose I can choose my <em>own</em> type.
Should I be a truth-seeker or an ideologue?
Again, my choice depends on the probability <code>\(p\)</code> that I play the learning game.
It turns out I’m better off seeking truth if and only if <code>\(p\)</code> exceeds another threshold <code>\(\underline{p}\)</code> that depends on <code>\(a\)</code>.<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>
This threshold has two interesting properties:</p>
<ol>
<li>It’s positive, so if <code>\(p\)</code> is small enough then I’m better off being an ideologue.</li>
<li>It’s smaller than <code>\(\overline{p}\)</code>, so <em>if</em> I’m better off being an ideologue then I’m <em>also</em> better off choosing an ideologue as my friend.</li>
</ol>
<p>Intuitively, if the truth doesn’t matter then there’s no point seeking it.
I might as well be an ideologue and choose ideological friends who always share my experience.</p>
<p>One can extend this model to choosing many friends with a range of accuracies and biases.
Some people might be more truth-seeking than others.
Some people might have correlated experiences because they get information from the same <a href="https://bldavies.com/blog/ideological-bias-trust-information-sources/">like-minded sources</a>.
These correlations determine the “experience portfolio” my friends can provide.
But the goal of this portfolio—whether I want it to provide truth or connection—still depends on how much I care about learning the truth.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Indeed this assumption motivates the extensive literature on social learning “failures.”
These failures arise from, e.g.,
unequal influence (<a href="https://doi.org/10.1093/restud/rdr004">Acemoglu et al., 2011</a>; <a href="https://doi.org/10.1257/mic.2.1.112">Golub and Jackson, 2010</a>),
network structure (<a href="https://doi.org/10.3982/ECTA14407">Chandrasekhar et al., 2020</a>; <a href="https://doi.org/10.1145/3505156.3505163">Dasaratha and He, 2021</a>),
herding (<a href="https://doi.org/10.2307/2118364">Banerjee, 1992</a>; <a href="https://doi.org/10.1086/261849">Bikhchandani et al., 1992</a>; <a href="https://doi.org/10.1111/1468-0262.00113">Smith and Sørensen, 2000</a>),
conformity (<a href="https://doi.org/10.1007/s10670-019-00167-6">Mohseni and Williams, 2021</a>),
misinformation (<a href="https://doi.org/10.1287/mnsc.2022.4340">Mostagir and Siderius, 2022</a>),
and misinterpretation (<a href="https://doi.org/10.3982/ECTA16981">Frick et al., 2020</a>). <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Choosing another truth-seeker makes me win the learning game with probability <code>\(a\)</code> and the connecting game with probability <code>\(a^2+(1-a)^2\)</code>.
Both of these probabilities exceed 0.5, the probability of winning either game if I choose an ideologue. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
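A quick numeric check (my own sketch, not from the post) that both probabilities beat the ideologue's coin flip whenever the accuracy exceeds one half:

```python
# For any accuracy a > 0.5, choosing a truth-seeking friend wins the learning
# game with probability a and the connecting game with probability a^2 + (1-a)^2,
# both of which beat the 0.5 an ideologue friend would give.
for a in [0.55, 0.75, 0.95]:
    p_learn = a
    p_connect = a**2 + (1 - a)**2  # both of us right, or both of us wrong
    assert p_learn > 0.5 and p_connect > 0.5
```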
</li>
<li id="fn:3" role="doc-endnote">
<p>If I’m an ideologue, then my <em>ex ante</em> chance of winning is <code>\(pa+0.5(1-p)\)</code> if I choose a truth-seeking friend and <code>\(0.5p+(1-p)\)</code> if I choose another ideologue. <a href="#fnref:3" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p>The exact probability is
<code>$$\underline{p}\equiv \frac{4a(1-a)}{2a-1+4a(1-a)}.$$</code>
It comes from comparing the truth-seeker’s indirect objective
<code>$$pa+(1-p)(a^2+(1-a)^2)$$</code>
and the ideologue’s indirect objective
<code>$$\begin{cases}pa+0.5(1-p)&\text{if}\ p\ge\overline{p}\\0.5p+(1-p)&\text{otherwise}.\end{cases}$$</code>
These functions coincide when <code>\(p\in\{\underline{p},1\}\)</code>. <a href="#fnref:4" class="footnote-backref" role="doc-backlink">↩︎</a></p>
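Here's a quick check (my own sketch, with hypothetical function names) that this formula equates the two indirect objectives, and that they also coincide at one on the ideologue's other branch:

```python
# Truth-seeker's indirect objective.
def truth_seeker(p, a):
    return p * a + (1 - p) * (a**2 + (1 - a)**2)

# Ideologue's indirect objective, split by branch.
def ideologue_low(p, a):   # the p < p_bar branch
    return 0.5 * p + (1 - p)

def ideologue_high(p, a):  # the p >= p_bar branch
    return p * a + 0.5 * (1 - p)

for a in [0.6, 0.75, 0.9]:
    p_low = 4 * a * (1 - a) / (2 * a - 1 + 4 * a * (1 - a))
    # The objectives coincide at p_low (low branch) and at p = 1 (high branch).
    assert abs(truth_seeker(p_low, a) - ideologue_low(p_low, a)) < 1e-12
    assert abs(truth_seeker(1, a) - ideologue_high(1, a)) < 1e-12  # both equal a
```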
</li>
</ol>
</section>
Gender sorting among economists
https://bldavies.com/blog/gender-sorting-economists/
Fri, 03 Jun 2022 00:00:00 +0000https://bldavies.com/blog/gender-sorting-economists/<p>I have a <a href="https://doi.org/10.1016/j.econlet.2022.110640">new paper</a> on gender sorting in economic research teams.
Here’s the abstract:</p>
<blockquote>
<p>I compare the co-authorship patterns of male and female economists, using historical data on National Bureau of Economic Research working papers.
Men tended to work in smaller teams than women, but co-authored more papers and so had more co-authors overall.
Both men and women had more same-gender co-authors than we would expect if co-authorships were random.
This was especially true for men in Macro/Finance.</p>
</blockquote>
<p>I show that the NBER co-authorship network is <a href="https://bldavies.com/blog/assortative-mixing/">assortatively mixed</a> with respect to gender, and has been since the late 1980s.
This could reflect explicit choices to work in same-gender teams.
But it could also be a consequence of other choices (e.g., which topics to research) that lead to gender sorting.
I leave this distinction open for future research.</p>
<p>The paper uses data from <a href="https://github.com/bldavies/nberwp/">nberwp</a>, an R package I’ve been working on since 2019.
I’ve described and used the package in several blog posts:</p>
<ul>
<li><a href="https://bldavies.com/blog/introducing-nberwp/">Introducing nberwp</a></li>
<li><a href="https://bldavies.com/blog/nber-co-authorships/">NBER (co-)authorships</a></li>
<li><a href="https://bldavies.com/blog/triadic-closure-nber/">Triadic closure at the NBER</a></li>
<li><a href="https://bldavies.com/blog/female-representation-collaboration-nber/">Female representation and collaboration at the NBER</a></li>
<li><a href="https://bldavies.com/blog/nberwp-cran/">nberwp is now on CRAN</a></li>
<li><a href="https://bldavies.com/blog/nberwp-1-1-0/">nberwp 1.1.0</a></li>
<li><a href="https://bldavies.com/blog/publication-outcomes-nber-working-papers">Publication outcomes of NBER working papers</a></li>
<li><a href="https://bldavies.com/blog/gender-differences-publication-rates-nber-programs/">Gender differences in publication rates within NBER programs</a></li>
</ul>
<p>The paper is in <a href="https://www.sciencedirect.com/journal/economics-letters"><em>Economics Letters</em></a>, which publishes concise papers at most 2,000 words long.
This seemed appropriate for my paper: it’s longer than a blog post but shorter than an <a href="https://www.aeaweb.org/journals/aer"><em>AER</em></a> epic.
The few words mask the many hours spent collecting and cleaning the data (e.g., manually identifying about 2,500 authors’ genders).
Such is the nature of publishing empirical work.</p>
Gender differences in publication rates within NBER programs
https://bldavies.com/blog/gender-differences-publication-rates-nber-programs/
Sat, 28 May 2022 00:00:00 +0000https://bldavies.com/blog/gender-differences-publication-rates-nber-programs/<p>My <a href="https://bldavies.com/blog/publication-outcomes-nber-working-papers/">previous post</a> showed that NBER research programs with higher female representation tend to have fewer papers published in the “Top Five” economics journals.
A reader suggested comparing Top Five publication rates among men and women <em>within</em> each program.
This comparison reveals whether men and women publish at different rates despite writing about similar topics.
Here’s the chart:</p>
<p><img src="figures/top-fives-1.svg" alt=""></p>
<p>Most points lie below the dashed diagonal line.
Such points represent programs in which male-authored papers are more likely to be in Top Fives than female-authored papers.
This “male premium” in Top Five publication rates doesn’t appear to differ between programs in the “Micro” and “Macro/Finance” subfields defined in <a href="https://doi.org/10.31235/osf.io/zeb7a">Davies (2022)</a>.
The premium is largest for the Corporate Finance (CF) program and most negative for the Development of the American Economy (DAE) program.</p>
<p>How do these patterns compare to publication rates across <em>all</em> journals?
Here’s the corresponding chart:</p>
<p><img src="figures/all-journals-1.svg" alt=""></p>
<p>Looking at all journals, rather than only Top Fives, lowers the “male premium” in publication rates.
It also reveals differences between subfields: some Micro programs have negative premia, but all Macro/Finance programs have positive premia.</p>
<p>What explains these patterns?
Here are two theories:</p>
<ol>
<li>Women submit papers to Top Fives less often.
This would be consistent with evidence that women shy away from competition relative to equally competent men (see, e.g., <a href="https://doi.org/10.1146/annurev-economics-111809-125122">Niederle and Vesterlund, 2011</a>).</li>
<li>Top Five referees and editors discriminate against women.
This would be consistent with evidence that women are held to higher editorial standards (<a href="https://doi.org/10.1093/qje/qjz035">Card et al., 2020</a>; <a href="https://ideas.repec.org/p/cam/camdae/1753.html">Hengel, 2017</a>).</li>
</ol>
<p>Unfortunately I can’t test these theories with my data.
I observe publication outcomes, but not journal submissions or referee/editor biases.
And the two theories aren’t mutually exclusive: women may submit less often <em>because</em> they anticipate discrimination.</p>
Publication outcomes of NBER working papers
https://bldavies.com/blog/publication-outcomes-nber-working-papers/
Tue, 17 May 2022 00:00:00 +0000https://bldavies.com/blog/publication-outcomes-nber-working-papers/<p>The latest version of <a href="https://github.com/bldavies/nberwp">nberwp</a> (1.2.0) contains information on where NBER working papers are published:</p>
<table>
<thead>
<tr>
<th align="left">Outlet</th>
<th align="right">Papers</th>
<th align="right">Share (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Top Five journals</td>
<td align="right">3,832</td>
<td align="right">12.7</td>
</tr>
<tr>
<td align="left">Other journals</td>
<td align="right">14,792</td>
<td align="right">49.2</td>
</tr>
<tr>
<td align="left">Book/chapters</td>
<td align="right">3,096</td>
<td align="right">10.3</td>
</tr>
<tr>
<td align="left">Unpublished</td>
<td align="right">8,363</td>
<td align="right">27.8</td>
</tr>
</tbody>
</table>
<p>About 62% of working papers are published or forthcoming in peer-reviewed journals.
One in five of these papers is in the “Top Five:” the <a href="https://www.aeaweb.org/journals/aer"><em>American Economic Review</em></a>, <a href="https://www.econometricsociety.org/publications/econometrica/browse"><em>Econometrica</em></a>, the <a href="https://www.journals.uchicago.edu/loi/jpe"><em>Journal of Political Economy</em></a>, the <a href="https://academic.oup.com/qje/issue"><em>Quarterly Journal of Economics</em></a>, and the <a href="https://academic.oup.com/restud/issue"><em>Review of Economic Studies</em></a>.
These journals are the tallest peaks in the world of economic research.
Publishing in them <a href="https://www.aeaweb.org/research/charts/publishing-promotion-economics-top-five">can be vital for career progression</a>.</p>
<p>The chart below counts papers by decade and publication outcome.
As the number of NBER working papers grew, so did the number appearing in journals and the Top Five.
Yet the space available in Top Fives was relatively constant between the 1970s and 2010s (<a href="https://doi.org/10.1257/jel.51.1.144">Card and DellaVigna, 2013</a>).
NBER working papers occupied an increasing share of that space.</p>
<p><img src="figures/decade-counts-1.svg" alt=""></p>
<p>Why are so many NBER working papers in the Top Five?
Here are four possible reasons:</p>
<ol>
<li>The NBER working paper series is among <a href="https://logec.repec.org/scripts/seriesstat.pf">the most read series</a> in economics.
More readers means more feedback, which helps authors improve their papers and make them Top Five-worthy.</li>
<li>Each paper has an NBER-affiliated author.
“Affiliates are selected through a rigorous and competitive process” (see <a href="https://www.nber.org/affiliated-scholars">here</a>).
This process may select authors more willing and able to pursue Top Five publications.</li>
<li>NBER working papers tend to apply cutting-edge methods to policy-relevant issues.
This makes papers attractive to Top Five editors, who want to publish frontier, impactful research.</li>
<li>Top Five editors tend to be NBER affiliates.
Club co-membership might help authors during peer-review.</li>
</ol>
<h2 id="gender-differences">Gender differences</h2>
<p>nberwp contains information on author genders, so we can compare the representation of women among papers with different publication outcomes.
Here’s one approach:</p>
<ol>
<li>Compute the fraction of authors on each paper who were women.</li>
<li>Sum these fractions across all papers.</li>
<li>Divide by the number of papers.</li>
</ol>
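In code, the estimator looks like this (a minimal sketch with made-up author lists, not the nberwp data):

```python
# Each paper is a list of its authors' genders; "F" marks a female author.
papers = [["F", "M"], ["M"], ["F", "F", "M"], ["M", "M"]]

# Step 1: fraction of each paper's authors who were women.
fractions = [authors.count("F") / len(authors) for authors in papers]

# Steps 2-3: sum the fractions and divide by the number of papers.
female_share = sum(fractions) / len(papers)
assert abs(female_share - 7 / 24) < 1e-12  # (1/2 + 0 + 2/3 + 0) / 4
```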
<p>These three steps deliver an estimate of the share of papers written by women.
This estimate equals 16.5% across all NBER working papers.
The chart below separates by decade and publication outcome.
Female representation grew over time, both overall and among papers published in journals.
But the growth was slower among papers published in the Top Five.
Women were consistently less represented among papers published in the Top Five than among other papers.
Overall, only 12.5% of NBER working papers in the Top Five were written by women.</p>
<p><img src="figures/female-representation-1.svg" alt=""></p>
<p>What explains the relative gender gap for papers in the Top Five?
Perhaps it reflects what men and women write about.
One way to explore this is to compare female representation and Top Five publication rates across the NBER’s <a href="https://www.nber.org/programs-projects/programs-working-groups">research programs</a>, which “correspond loosely to traditional field[s] of study within economics.”
I present that comparison in the chart below.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
The horizontal axis measures female representation using the estimator defined above; the vertical axis measures the share of papers in each program published in the Top Five.</p>
<p><img src="figures/programs-1.svg" alt=""></p>
<p>Programs with lower female representation tend to have proportionally more papers in the Top Five.
The Monetary Economics (ME) program, which has the lowest female representation, has more papers in the Top Five than the program on Children (CH), which has the highest female representation.
Papers in the Economic Fluctuations and Growth (EFG) program tend to focus on “big picture” questions and often land in Top Fives.
Papers in the Health Economics (HE) program tend to focus on more specific questions, and often land in field or medical journals.
But papers in the HE program are about three times as likely to be written by women as are papers in the EFG program.
This difference in likelihoods contributes to lower female representation among NBER working papers published in the Top Five.</p>
<p>But <em>why</em> are the likelihoods different?
Why do proportionally fewer women write papers on growth than on children?
Perhaps this reflects what men and women enjoy researching.
But, again, publishing in the Top Five can be vital for career progression.
So, at the margin, I’d expect researchers to choose topics more likely to land in Top Five journals.
These choices do not appear in my data.
I’m interested to learn more—<a href="mailto:bldavies@stanford.edu">reach out</a> if you are too.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>I compare publication rates among men and women <em>within</em> each program <a href="https://bldavies.com/blog/gender-differences-publication-rates-nber-programs">here</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Judging economic models
https://bldavies.com/blog/judging-economic-models/
Mon, 16 May 2022 00:00:00 +0000https://bldavies.com/blog/judging-economic-models/<p>Lots of people criticize economic models for being unrealistic.
“Humans are irrational,” they cry; “financial markets are inefficient.”
These criticisms are valid, but they miss the point.
Models aren’t <em>meant</em> to be realistic.
They’re meant to simplify reality: to focus our attention on what’s relevant and abstract from what isn’t.
<a href="https://en.wikipedia.org/wiki/All_models_are_wrong">All models are wrong</a>—we shouldn’t judge them for their realism.</p>
<p>How <em>should</em> we judge a model?
Here are two criteria:</p>
<ol>
<li>it makes predictions that agree with data;</li>
<li>it helps us think clearly about how the world works.</li>
</ol>
<p>Economists use models to generate predictions, such as “people buy less when prices rise.”
We test these predictions using data from the real world.
When the predictions and data disagree, we reject the model and search for something better.
This search leads to new models with new predictions.
Under the first criterion, “better” models make more true predictions.</p>
<p>Model predictions come in different forms.
“Within-sample” predictions tell us about data we’ve seen; “out-of-sample” predictions tell us what to expect in data we <em>haven’t</em> seen.</p>
<p>We test within-sample predictions by asking if the model “fits” the data it was designed to explain.
Bad models fail this test.
But useless models can pass it.
For example, suppose I have a list of quantity-price pairs.
I use the list as my “model” of demand.
My model fits the data because the data fit the data.
But my model says nothing about <em>why</em> people buy a given quantity at a given price.
It also says nothing about how much people buy at <em>other</em> prices.</p>
<p>Hence, we also test out-of-sample predictions.
We ask if the model fits relevant data it <em>wasn’t</em> designed to explain.
This helps us learn whether the model captures general principles rather than contextual quirks.
It also helps us be logically consistent.
For example, suppose I want to explain some pattern Y.
I write down a model in which I assume behavior X, which implies Y.
But X <em>also</em> implies pattern Z.
Do I think Z is reasonable?
Do I observe it empirically?
If not, then I should revise my model and not assume X.
Writing down the model makes my assumptions explicit and easier to correct.</p>
<p>The second criterion makes room for some models with false predictions.
The <a href="https://en.wikipedia.org/wiki/Efficient-market_hypothesis">efficient market hypothesis</a> is a good example.
It predicts that you can’t use public information to “beat the market.”
This prediction is false—<a href="https://en.wikipedia.org/wiki/Renaissance_Technologies">RenTech</a> offers one counter-example.
But the EMH helps us organize our thoughts about how, when, and why prices incorporate information.
It guides our intuitions.
It also provides a benchmark against which to compare models of <em>in</em>efficient markets.</p>
<p>Another benchmark model is that of <a href="https://bldavies.com/blog/degroot-learning-social-networks/">DeGroot learning</a>.
Its main prediction—“under some conditions, society reaches a consensus eventually”—is hard to test because “eventually” never arrives.
But the model offers a tractable (and <a href="https://doi.org/10.3982/ECTA14407">surprisingly realistic</a>) way to study how people learn.
We can enrich the model by adding <a href="https://doi.org/10.1093/qje/qjs021">homophily</a> or <a href="https://doi.org/10.1287/mnsc.2022.4340">misinformation</a>.
These additions make the model more realistic but more complex.
Having a benchmark helps us assess whether the extra realism is “worth” the extra complexity (e.g., by adding explanatory power).</p>
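For concreteness, here's a minimal DeGroot sketch (with toy trust weights of my choosing): each agent repeatedly replaces their belief with a weighted average of everyone's beliefs, and beliefs converge to a consensus.

```python
# Row-stochastic trust matrix: W[i][j] is the weight agent i puts on agent j.
W = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]
beliefs = [0.0, 0.5, 1.0]  # initial beliefs

for _ in range(200):  # iterate beliefs <- W * beliefs
    beliefs = [sum(w * b for w, b in zip(row, beliefs)) for row in W]

# Consensus: all beliefs are (nearly) equal.
assert max(beliefs) - min(beliefs) < 1e-9
```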
<p>Sometimes the realism <em>is</em> worth the complexity.
This is especially true when we use models to help us design new systems.
As <a href="https://doi.org/10.1007/978-3-030-18050-8_72">Jackson (2019)</a> notes,</p>
<blockquote>
<p>One would never design a large airliner without carefully modeling its aeronautic properties, and testing it thoroughly via simulations and test flights of prototypes, before loading it with passengers.
Why should designing a market for health insurance be any different?
Models have the virtue of offering us insight into what we should expect in scenarios that have never been tried before.</p>
</blockquote>
<p>Models give us prototypes to test.
They let us run theoretical experiments when “real” experiments are expensive or infeasible.
They guide our search for better designs.
The “best” design might be complicated because reality is complicated.
Ignoring some complications in the model may lead us astray.
But we don’t need <em>all</em> the complications: health insurance markets don’t depend critically on whether I buy blue jeans or black.</p>
<p>Moreover, when designing new systems, the object of interest is the design rather than the model of it.
The model is just a tool.
We use it to focus on relevant factors and abstract from irrelevant factors.
Different models arise from making different choices about which factors are relevant.
Our job as economic modelers is to make those choices well.</p>
Echo chambers can be useful
https://bldavies.com/blog/echo-chambers-useful/
Fri, 08 Apr 2022 00:00:00 +0000https://bldavies.com/blog/echo-chambers-useful/<p>Talking to lots of people who know different things helps us learn.
Yet many of us sort into <a href="https://en.wikipedia.org/wiki/Echo_chamber_%28media%29">echo chambers</a>, only talking to a few like-minded people.
Doesn’t this hinder learning?</p>
<p><a href="https://scholar.google.com/scholar?cluster=15025907096771476263">Jann and Schottmüller (2021)</a> answer: “not always.”
Different people know <em>and want</em> different things.
These differences give us <a href="https://bldavies.com/blog/persuading-anecdotes">persuasion temptations</a>: we tell selective stories to influence others’ behavior.
We don’t share everything we know.
Sorting into echo chambers removes our persuasion temptations.
This leads to more sharing and learning.</p>
<p>The authors formalize this idea as follows:
Each agent has a binary <a href="https://en.wikipedia.org/wiki/Bit">bit</a> of information.
Summing these bits gives the “state.”
Agents take actions based on (i) their individual biases and (ii) their beliefs about the state.
They want other agents to take similar actions.
Biases are <a href="https://en.wikipedia.org/wiki/Common_knowledge_%28logic%29">common knowledge</a>.</p>
<p>Agents learn about the state by talking to each other.
But before they talk, agents sort into “rooms.”
Agents only talk to people in their room.
They choose what to say based on how it influences their roommates’ actions.
They either</p>
<ol>
<li>share their bit (i.e., tell the truth),</li>
<li>share one minus their bit (i.e., lie), or</li>
<li>share a zero or one randomly.</li>
</ol>
<p>Agents can “babble” by sharing a zero or one independently of their bit.
For example, they could flip a coin and share a one if it lands on heads.
Babbling is uninformative.</p>
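To see why babbling is uninformative, here's a tiny simulation (my own sketch, not from the paper): a babbled message matches the sender's bit no more often than chance, while a truthful message always matches.

```python
import random

random.seed(0)
n = 100_000
bits = [random.randint(0, 1) for _ in range(n)]

truthful = bits                                    # message = bit
babble = [random.randint(0, 1) for _ in range(n)]  # coin flip, ignores the bit

def match_rate(bits, messages):
    """Share of messages that equal the underlying bit."""
    return sum(b == m for b, m in zip(bits, messages)) / len(bits)

assert match_rate(bits, truthful) == 1.0           # fully informative
assert abs(match_rate(bits, babble) - 0.5) < 0.01  # no better than chance
```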
<p>Jann and Schottmüller first study the <a href="https://en.wikipedia.org/wiki/Cheap_talk#Theorem">most informative equilibrium</a> of the bit-sharing game played in each room.
In this equilibrium, everyone tells the truth or babbles.
Agents close to the mean bias among their roommates tell the truth.
Agents far from that mean babble.</p>
<p>The authors then study how agents choose rooms.
These choices anticipate how agents talk in each room.
In equilibrium, no agent wants to change rooms on their own.
This equilibrium is “welfare-optimal” if it leads to more truth-telling than any other equilibrium.</p>
<p>Whether the equilibrium room choices are welfare-optimal depends on agents’ biases.
For example, suppose agents are polarized: their biases take one of two values with equal probability.
The difference between these values captures the level of polarization.
The authors show that</p>
<ol>
<li>full segregation is welfare-optimal if polarization is <em>high</em> enough, and</li>
<li>full integration is welfare-optimal if polarization is <em>low</em> enough.</li>
</ol>
<p>If polarization is high enough then having opposite-minded agents in the same room creates persuasion temptations.
These temptations lead to babbling in the most informative equilibrium.
Full segregation prevents babbling.
On the other hand, if polarization is <em>low</em> enough then no-one has persuasion temptations and so no-one babbles.
Then having everyone in the same room is welfare-optimal.</p>
<p>More polarization leads to less bit sharing in equilibrium.
This is because polarization creates persuasion temptations.
Segregation actually <em>removes</em> some of these temptations.
But it can’t restore communication between agents who moved to different rooms.
Thus, polarization lowers welfare <em>despite</em> segregation, rather than <em>because</em> of it.
The authors summarize this point nicely:</p>
<blockquote>
<p>“One could think of echo chambers as society’s (decentralized) defense mechanism against polarization.
Like fever in a human body, segregation occurs as the effect of an underlying problem, and its presence hence indicates that polarization is at problematic levels.
Echo chambers, and segregation more generally, are hence a symptom of polarization.
And just like artificially lowering fever, treating the symptom without addressing the cause can in fact exacerbate the situation.
Reducing polarization will weakly improve welfare; reducing segregation may not.”</p>
</blockquote>
<p>The authors go on to study extensions of their model.
For example, they show that adding public information can <a href="https://en.wikipedia.org/wiki/Crowding_out_%28economics%29">crowd out</a> incentives to tell the truth.
They also show that their model agrees with data from Twitter.
The authors suggest that social media platforms do more than connect people: they provide infrastructure for efficient segregation.</p>
<p>Jann and Schottmüller close by calling for more nuanced discussion of echo chambers:
Yes, they limit the diversity of whom we meet and talk to.
But</p>
<blockquote>
<p>“there is simply no use in meeting people with a very diverse set of opinions and very useful information, if there is no way to get that information out of them.”</p>
</blockquote>
Tolerance and compromise in social networks
https://bldavies.com/blog/tolerance-compromise-social-networks/
Fri, 01 Apr 2022 00:00:00 +0000https://bldavies.com/blog/tolerance-compromise-social-networks/<p>Everyone has beliefs about how they should behave.
But people differ in their beliefs.
They also differ in their tolerance of <em>others’</em> beliefs.
These differences affect who becomes friends with whom.
Some people “stick to their guns” and befriend only those who agree.
Others are more tolerant and befriend people who disagree.
Such people are more willing to compromise, changing their behavior to accommodate friends’ beliefs.</p>
<p><a href="https://doi.org/10.1086/717041">Genicot (2022)</a> studies tolerance and compromise in social networks.
She describes a finite population of agents with different “ideal” actions.
Agents prefer taking their ideal actions.
They also prefer friends who take their ideal actions.
An agent’s “tolerance” is the largest deviation from their ideal they can accept in a friend’s action.</p>
<p>Agents take actions before making friends.
An agent “compromises” if they take an action different than their ideal.
Compromise is costly but may lead to beneficial friendships.
Agents weigh these costs and benefits when taking actions.
Genicot studies the equilibrium in which no-one wants to change their action.</p>
<p>If everyone has the same tolerance then no-one compromises.
The reason is as follows:
No-one wants to compromise more than is necessary for their friends’ acceptance.
Thus, anyone who compromises must do so “minimally” for at least one friend.
This friend must also compromise because tolerances are equal.
In fact, they must compromise <em>more</em> to make the friendship net beneficial.
But then <em>they</em> must have another friend who compromises <em>even more</em>.
We can keep applying this argument to find agents who compromise more and more, which is impossible because the population is finite.</p>
<p>Compromise thus depends on differences in tolerance.
Agents compromise by deviating from their ideals towards the ideals of relatively intolerant friends.
Some compromises are one-sided, where the intolerant friend stands their ground.
Other compromises are two-sided.
Two-sided compromises rely on intolerant “bridge” agents, who bring their tolerant friends’ actions close enough together to be mutually acceptable.</p>
<p>Compromise also depends on how tolerances and ideals covary.
If agents with “extreme” ideals are less tolerant then two-sided compromise is impossible.
This is because agents compromise towards intolerant extremists.
Consequently, actions tend to be more polarized than ideals.
In contrast, if extremists are more tolerant then agents compromise towards the median.
This makes the population more connected.</p>
<p>Genicot interprets these results in light of recent political trends.
She cites evidence of intolerance among liberals and conservatives, and of rising polarization in the United States.
These patterns are consistent with Genicot’s model.
If people want to make friends, but making friends requires compromise towards extremes, then people will behave more extremely.</p>
<p>Genicot closes with guidance on finding tolerant people:</p>
<blockquote>
<p>Looking at the identity of the members of a person’s social network may overestimate the tolerance exhibited by the person.
The distance between a person’s identity and her friends’ behaviors would likely tell us more about her tolerance.</p>
</blockquote>
<p>Tolerance isn’t about having diverse friends; it’s about <em>not forcing friends to accommodate your beliefs</em>.</p>
Persuading with anecdotes
https://bldavies.com/blog/persuading-anecdotes/
Fri, 18 Mar 2022 00:00:00 +0000https://bldavies.com/blog/persuading-anecdotes/<p>My <a href="https://bldavies.com/blog/ideological-bias-trust-information-sources/">previous post</a> explained why rational people can prefer like-minded information sources.
This preference leads media outlets to compete by targeting biased audiences.
Such targeting can take (at least) two forms:</p>
<ol>
<li>presenting content in a way some people like and others don’t;</li>
<li>only sharing content that some people like and others don’t.</li>
</ol>
<p><a href="https://www.nber.org/papers/w28661">Haghtalab et al. (2021)</a> study the second form.
They consider a pair of Bayesian agents called Sender (he) and Receiver (she).
Both agents take actions (e.g., get vaccinated) based on their beliefs about an unknown state (e.g., whether vaccines are effective) and their “moral stance.”
Sender observes some noisy signals about the state before taking his action.
He sends one of those signals to Receiver before she takes her action.
Sender’s “communication scheme” determines which signal he sends.
He chooses this scheme knowing his and Receiver’s moral stances, but before observing any signals.</p>
<p>Sender wants Receiver to take the same action as him.
He chooses the scheme that minimizes the mean distance (across signal realizations) between his and Receiver’s actions.
This distance has three components:</p>
<ol>
<li>A “signalling loss” from sending one signal rather than many;</li>
<li>A “persuasion temptation” from wanting Receiver to take the same action;</li>
<li>An unavoidable loss from differences in moral stances.</li>
</ol>
<p>If Receiver knows the communication scheme then Sender just minimizes the signalling loss.
This is because Receiver can “undo” any bias in the scheme, so persuasion is impossible.
But if Receiver <em>doesn’t</em> know the scheme then Sender trades off the signalling loss and the persuasion temptation.
This makes both agents worse off because the signal sent is less informative.</p>
<p>Suppose the signal distribution is “well-behaved” (e.g., single-peaked) and Receiver knows the communication scheme.
If Sender observes enough signals then he always prefers unbiased schemes.
Intuitively, Sender wants to send all the signals he observes but can only send one.
He sends the “most representative” signal: the one closest to the mean.
But this logic breaks down when Sender observes too few signals.
In that case, the mean signal is noisy and extreme signals can be more informative.
This can make Receiver prefer biased schemes.</p>
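A small simulation illustrates the logic (my own sketch under simple normal-signal assumptions, not the authors' exact setup): a Sender who sends the observed signal closest to the sample mean tracks the state much better when he observes many signals than when he observes few.

```python
import random

random.seed(1)
STATE = 0.0  # the unknown state; signals are noisy draws around it

def sent_signal(n):
    """Sender observes n signals and sends the one closest to their mean."""
    signals = [random.gauss(STATE, 1.0) for _ in range(n)]
    mean = sum(signals) / n
    return min(signals, key=lambda s: abs(s - mean))

def mse(n, trials=5_000):
    """Mean squared error of the sent signal as an estimate of the state."""
    return sum(sent_signal(n) ** 2 for _ in range(trials)) / trials

# The "most representative" signal is far noisier with few observations.
assert mse(50) < mse(3)
```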
<p>Now suppose Receiver <em>doesn’t</em> know the communication scheme.
Then Sender chooses more biased schemes when he observes more signals.
He does so because of his persuasion temptation.
Again, this makes both agents worse off.
So Receiver prefers when Sender shares her moral stance because then their incentives are aligned.</p>
<p>This preference for like-mindedness also depends on the number and distribution of signals Sender observes.
For example, suppose Receiver chooses between</p>
<ol>
<li>an expert with many signals but a different moral stance, and</li>
<li>a layperson with one signal but the same moral stance.</li>
</ol>
<p>Receiver prefers the layperson when the signal distribution is <a href="https://en.wikipedia.org/wiki/Fat-tailed_distribution">thick-tailed</a>.
This is because the expert observes more signals “in the tail,” so they send a more extreme (and, thus, less informative) signal due to their persuasion temptation.</p>
<p>It may seem restrictive that Sender shares a raw signal rather than, say, his posterior estimate of the state.
But such sharing reflects how real people talk to each other.
Real people don’t trade summary statistics on vaccination outcomes.
Instead, they trade anecdotes like “I felt tired and achy after my booster shot.”
News outlets do the same: they typically report on individual events rather than aggregate patterns.
(When was the last time you saw a <a href="https://en.wikipedia.org/wiki/Base_rate">base rate</a> in the news?)
These anecdotes in conversation and events in the news correspond to signals in the authors’ model.</p>
Ideological bias and trust in information sources
https://bldavies.com/blog/ideological-bias-trust-information-sources/
Wed, 09 Mar 2022 00:00:00 +0000https://bldavies.com/blog/ideological-bias-trust-information-sources/<p>If people were <a href="https://en.wikipedia.org/wiki/Bayesian_inference">Bayesian</a>, then giving them more information would help them learn the truth and reach consensus.
But most people <em>aren’t</em> Bayesian.
They can have, e.g., <a href="https://en.wikipedia.org/wiki/Confirmation_bias">confirmation bias</a> or limited memory.
These cognitive “errors” can lead people with access to lots of information to disagree.</p>
<p><a href="https://web.stanford.edu/~gentzkow/research/trust.pdf">Gentzkow, Wong and Zhang (2021)</a> show that such errors are not necessary for disagreement.
The authors consider Bayesian agents with access to some information sources.
Agents don’t know which sources they can trust.
They learn to trust sources that are consistent with their personal experiences.
Variation in experiences can lead agents to disagree, even as the number of sources grows.</p>
<p>In Gentzkow et al.’s model, sources send <a href="https://bldavies.com/blog/learning-noisy-signals">noisy signals</a> about many “states.”
States represent objective facts about different issues, such as mask effectiveness or the extent of global warming.
States vary in their “ideological valence:” how favorable they are to liberals or conservatives.
Sources vary in their accuracy (i.e., signals’ correlation with states) and ideological bias (i.e., signals’ correlation with ideological valences).
Agents want to learn sources’ accuracies and biases, which are constant across issues.</p>
<p>Agents learn by comparing signals to their personal experience, such as friends’ disease outcomes or local weather events.
Experiences, like sources, vary in their accuracy and ideological bias (due to, e.g., <a href="https://bldavies.com/blog/polarized-beliefs-social-networks/">choosing like-minded friends</a>).
However, agents believe their experience is <em>unbiased</em>.
This belief gives each agent a baseline against which to compare signals.
Different agents have different baselines, leading to different inferences from the signals they receive.</p>
<p>The authors show that biased agents prefer like-minded sources.
When comparing sources with the same accuracy but opposite biases, agents think the source sharing their bias is more accurate.
Agents also under-estimate the bias of like-minded sources and think unbiased sources are opposite-minded.
These patterns stem from agents’ dogmatic beliefs that their experiences are unbiased.
Agents learn the truth if and only if their experiences <em>really are</em> unbiased.</p>
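The dogmatic-experience mechanism can be sketched in a few lines (a heavy simplification of the paper, with functional forms of my own): the agent attributes any systematic gap between a source's signals and their own experiences entirely to the source.

```python
import random

def perceived_bias(agent_bias, source_bias, periods=50_000, seed=1):
    """Agent compares a source's signals to their own experiences across many
    issues, dogmatically assuming their own experiences are unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(periods):
        state = rng.gauss(0, 1)                           # issue-specific state
        experience = state + agent_bias + rng.gauss(0, 0.5)
        signal = state + source_bias + rng.gauss(0, 0.5)
        total += signal - experience                      # gap blamed on the source
    return total / periods  # converges to source_bias - agent_bias

# A biased agent (bias +1) judging three sources:
print(perceived_bias(1, 1))   # like-minded source looks unbiased
print(perceived_bias(1, 0))   # truly unbiased source looks opposite-minded
print(perceived_bias(1, -1))  # opposite source's bias is over-estimated
```

Because the agent's estimate converges to <code>source_bias - agent_bias</code> rather than <code>source_bias</code>, they learn sources' true biases if and only if their own experience really is unbiased.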
<p>The authors also show that biases in experiences can lead to disagreements about states.
Suppose the bias in two agents’ experiences have equal magnitudes but opposite signs.
As the magnitude grows, the agents become more likely to disagree.
Having more sources doesn’t always help.
It can actually lead to <em>more</em> disagreement because agents can combine sources to construct a maximally like-minded composite.</p>
<p>This demand for like-minded sources affects media market outcomes.
People devote their attention to media outlets they trust.
Outlets profit from capturing people’s attention.
The authors show that monopolists maximize profit by offering accurate and unbiased information, whereas competing outlets <a href="https://en.wikipedia.org/wiki/Product_differentiation">differentiate</a> by targeting biased audiences.</p>
<p>All of these results rely on some technical assumptions.
For example, agents only see normally distributed data.
This makes the math (relatively) easy but limits generality.
I don’t mind those assumptions because they lead to clear, testable hypotheses about why people disagree.
What remains is to test them.</p>
Pre-screening evidence
https://bldavies.com/blog/pre-screening-evidence/
Wed, 02 Mar 2022 00:00:00 +0000https://bldavies.com/blog/pre-screening-evidence/<p><a href="https://doi.org/10.1016/j.jet.2021.105401">Cheng and Hsiaw (2022)</a> study an agent who wants to learn about a binary state (e.g., whether a vaccine is safe).
An information source (e.g., Fauci or a Twitter thread) sends <a href="https://bldavies.com/blog/learning-noisy-signals/">noisy signals</a> about the state.
But the agent doesn’t know whether the source is “credible.”
They have to infer credibility from the signals received.</p>
<p>The authors compare two types of agents: Bayesians and “pre-screeners.”
Both types respond to new evidence in two steps:</p>
<ol>
<li>Update beliefs about whether the source is credible.</li>
<li>Update beliefs about the state, weighing the evidence by its credibility.</li>
</ol>
<p>The two types differ in the second step.
Whereas Bayesians use their <em>prior</em> beliefs about credibility, pre-screeners use their <em>updated</em> beliefs.
Pre-screeners “double-dip” the evidence: once to evaluate its credibility, and again to evaluate its likelihood <em>given</em> its credibility.
Bayesians never double-dip: they evaluate credibility and likelihood independently.</p>
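Here is one way to code up the two updating rules (a sketch under functional-form assumptions of my own, not necessarily Cheng and Hsiaw's): the state and the source's credibility are both binary, a credible source matches the state with probability <code>q</code>, and a non-credible source is pure noise.

```python
def update(p, k, s, q=0.9, prescreen=False):
    """One signal s in {0, 1}. p = belief that the state is 1; k = belief that
    the source is credible. A credible source matches the state with
    probability q; a non-credible source reports 0 or 1 with equal chance."""
    # Step 1: update the belief that the source is credible.
    like_cred = q * p + (1 - q) * (1 - p) if s == 1 else q * (1 - p) + (1 - q) * p
    k_new = like_cred * k / (like_cred * k + 0.5 * (1 - k))
    # Step 2: update the belief about the state, weighing evidence by credibility.
    w = k_new if prescreen else k  # pre-screeners double-dip the evidence here
    l1 = w * (q if s == 1 else 1 - q) + (1 - w) * 0.5
    l0 = w * (1 - q if s == 1 else q) + (1 - w) * 0.5
    p_new = l1 * p / (l1 * p + l0 * (1 - p))
    return p_new, k_new

# Feed both types the same run of confirming signals:
p_b = p_p = k_b = k_p = 0.5
for _ in range(5):
    p_b, k_b = update(p_b, k_b, 1)
    p_p, k_p = update(p_p, k_p, 1, prescreen=True)
print(p_b, p_p)  # the pre-screener ends up more confident than the Bayesian
```

In this sketch a run of mutually confirming signals raises the pre-screener's trust in the source, and that freshly raised trust feeds straight back into how heavily they weigh those same signals.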
<p>Bayesians and pre-screeners can have different responses to the same evidence.
For example, suppose an agent thinks (i) they have COVID-19 and (ii) their testing procedure is accurate (i.e., credible).
The procedure <em>is</em> accurate, but they actually <em>don’t</em> have the virus.
They take a test; it returns “negative.”
Surprised, they take another test; “negative.”
They keep taking tests; the tests keep returning “negative.”</p>
<p>Suppose the agent is Bayesian.
The first result makes them think the testing procedure is inaccurate.
But they evaluate the first result using their <em>prior</em> belief about accuracy, which is that the procedure <em>is</em> accurate.
Consequently, they weaken their belief in having the virus.
This makes the second result less surprising than the first.
That result weakens the agent’s belief further.
Eventually, the agent stops being surprised: they gradually learn the procedure is accurate and they don’t have the virus.</p>
<p>Now suppose the agent pre-screens.
The first result makes them think the testing procedure is inaccurate.
They evaluate the first result using their <em>updated</em> belief about accuracy.
Consequently, they <em>strengthen</em> their belief in having the virus.
This makes the second result less surprising than the first: the agent expects an inaccurate result and, from their perspective, gets one.
They strengthen their belief further.
Eventually, the agent wonders, “if the procedure was inaccurate then it wouldn’t keep returning the same result.
Perhaps it <em>is</em> accurate after all?”
The agent then evaluates <em>all</em> the results as though the procedure is accurate, weakening their belief in having the virus sharply.
Suddenly, the agent learns the procedure is accurate and they don’t have the virus.</p>
<p>The Bayesian and pre-screener reach the same conclusion in different ways.
The Bayesian learns gradually because they evaluate each result independently.
The pre-screener learns suddenly because they evaluate <em>the entire history of results</em> as though they knew the testing procedure was accurate all along.
Cheng and Hsiaw show generally that, so long as signals tend to agree with the state, pre-screeners learn the truth eventually if Bayesians do too.</p>
<p>But “eventually” can mean “after an unhelpfully long time.”
In the meantime, pre-screeners and Bayesians can disagree about the state.
They do so because they disagree about credibility.
Pre-screeners either “over-trust” or “under-trust” the source relative to Bayesians.
Over-trust leads pre-screeners to think the state favored by the evidence is more likely than Bayesians think.
Under-trust has the opposite effect.</p>
<p>Cheng and Hsiaw call this pattern “correlated disagreement:” pre-screeners’ beliefs about the state tend to align with their beliefs about the credibility of sources supporting those beliefs.
For example, imagine collecting people’s opinions on (i) whether vaccines are safe and (ii) the credibility of sources saying vaccines are safe.
If people pre-screen then their opinions on (i) and (ii) should be positively correlated.</p>
<p>Correlated disagreement is one testable prediction of Cheng and Hsiaw’s model.
Another prediction is “first impression bias:” pre-screeners are more likely to think a source is credible if its first few signals agree with each other.
Bayesians have no such bias because their final beliefs don’t depend on which signals they see first.</p>
<p>A third prediction concerns how pre-screeners react to new evidence.
They over-react if the evidence confirms their priors and they think the source is credible.
They under-react if the evidence contradicts their priors <em>or</em> they think the source is <em>not</em> credible.</p>
<p>Cheng and Hsiaw also discuss how such reactions influence asset prices.
Disagreements over credibility (e.g., of financial reports) lead to disagreements over fundamental values.
These disagreements lead to speculation: people buy assets hoping to cash in on others’ over-confidence.
Speculation raises asset prices.
Eventually, disagreements over credibility disappear, investors wise up, and prices come crashing down.</p>
<p>Cheng and Hsiaw argue that disagreement comes from people not being Bayesian.
In contrast, <a href="https://web.stanford.edu/~gentzkow/research/trust.pdf">Gentzkow, Wong and Zhang (2021)</a> argue that disagreement can arise even if people <em>are</em> Bayesian.
I summarize their argument <a href="https://bldavies.com/blog/ideological-bias-trust-information-sources">here</a>.</p>
Communicating science
https://bldavies.com/blog/communicating-science/
Wed, 23 Feb 2022 00:00:00 +0000https://bldavies.com/blog/communicating-science/<p>Science is hard, and communicating it to a broad audience is even harder.
I don’t envy Anthony Fauci or his colleagues, who must summarize the science on vaccines to a range of parties with a range of prior beliefs.</p>
<p>What does it mean to communicate science “optimally?”
<a href="https://doi.org/10.3982/ECTA18155">Andrews and Shapiro (2021)</a> offer some guidance.
They consider an analyst who sends an audience a report about some data.
Audience members vary in their beliefs and objectives, and so vary in their reactions to a given report.
The analyst chooses a report that maximizes audience members’ welfare given their reactions.</p>
<p>Andrews and Shapiro compare two models:</p>
<ol>
<li>In the “communication model,” the analyst provides information and lets audience members take their preferred decision <em>given</em> that information.</li>
<li>In the “decision model,” the analyst takes a decision on audience members’ behalf.</li>
</ol>
<p>These two models generally have different optimal reporting rules.
For example, suppose the analyst has experimental data on a new drug.
Their audience is a range of governments, who want to subsidize the drug if its effect is positive and tax it if its effect is negative.
Everyone knows the true effect is non-negative, so taxing is never optimal.
But the analyst may estimate a negative effect due to sampling error in the experiment.
Under the decision model, the analyst optimally censors negative estimates because imposing a tax is worse than doing nothing.
Conversely, under the communication model, censoring is <em>never</em> optimal because it throws away information about effect size.</p>
<p>In this example, the analyst optimally reports a <a href="https://en.wikipedia.org/wiki/Sufficient_statistic">sufficient statistic</a> for the effect size (e.g., the mean outcomes within the experiment’s treatment and control groups).
In fact, reporting a sufficient statistic is <em>always</em> optimal under the communication model.</p>
<p>The communication and decision models can even have different <a href="https://en.wikipedia.org/wiki/Admissible_decision_rule">admissible</a> reporting rules.
For example, suppose the analyst has data on (true) treatment effects for many drugs.
Their audience is a range of physicians, who want to give the best drug to their patients.
Every physician believes that any drug is better than none (e.g., because patients can’t self-prescribe).
The analyst considers two reporting rules:</p>
<ol>
<li>Choose randomly among the drugs with the largest effect.</li>
<li>If all drugs have the same effect then do nothing; otherwise, use rule 1.</li>
</ol>
<p>Every physician prefers <em>some</em> drug to none, so doing nothing is never optimal.
Consequently, the first rule always dominates the second under the decision model.
But physicians can reconstruct the first rule from the second, so the second rule is (weakly) more informative.
Consequently, it always dominates the first rule under the communication model.</p>
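The reconstruction argument is mechanical enough to sketch in code (hypothetical effects data and function names of my own):

```python
import random

def rule1(effects, rng=random):
    """Choose randomly among the drugs with the largest effect."""
    best = max(effects.values())
    return rng.choice([d for d, e in effects.items() if e == best])

def rule2(effects, rng=random):
    """If all drugs have the same effect, report nothing; otherwise use rule 1."""
    if len(set(effects.values())) == 1:
        return None
    return rule1(effects, rng)

def physician(report, drugs, rng=random):
    """Reconstruct rule 1 from a rule-2 report: an empty report reveals a tie,
    so every drug is a best drug and the physician still prescribes one."""
    return rng.choice(drugs) if report is None else report

effects = {"A": 1.0, "B": 1.0, "C": 1.0}
print(rule2(effects))                            # None: the rule stays silent
print(physician(rule2(effects), list(effects)))  # but some drug is still prescribed
```

The empty report is not a lost recommendation: it carries the extra fact that all drugs tie, which is exactly why rule 2 is weakly more informative.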
<p>Andrews and Shapiro discuss more features of the two models, such as what happens when the analyst puts different weights on different audience members’ welfare.
The authors also discuss implications of their analysis for research practice, such as for reporting estimates of structural economic models.</p>
<p>One thing Andrews and Shapiro <em>don’t</em> discuss is what happens when the audience is <a href="https://en.wikipedia.org/wiki/Bounded_rationality">boundedly rational</a>.
Audience members may find it hard to process information—hence the value of having the analyst process it for them—due to cognitive or emotional costs.
Such costs make the audience <a href="https://en.wikipedia.org/wiki/Rational_inattention">rationally inattentive</a>.
<a href="https://scholar.google.com/scholar?cluster=10141712202393797072">Bloedel and Segal (2021)</a> study optimal communication to a rationally inattentive audience, but use the language of Bayesian persuasion (<a href="https://doi.org/10.1257/aer.101.6.2590">Kamenica and Gentzkow, 2011</a>) rather than statistical decision theory.</p>
<p>Another missing discussion is what happens when the audience doesn’t trust the analyst.
Suppose some audience members believe the analyst lies or suppresses truths for conspiratorial reasons.
How should the analyst respond to this belief?
How should they trade off the cognitive costs induced by providing information with the conspiracy theories induced by suppressing it?
This trade-off is both deliciously complicated and faced by real-world science communicators.
Again, I do not envy them!</p>
Assortativity and correlation coefficients
https://bldavies.com/blog/assortativity-correlation-coefficients/
Thu, 17 Feb 2022 00:00:00 +0000https://bldavies.com/blog/assortativity-correlation-coefficients/<p>This is a technical follow-up to a previous post on <a href="https://bldavies.com/blog/assortative-mixing/">assortative mixing in networks</a>.
In a <a href="https://bldavies.com/blog/assortative-mixing/#fn:1">footnote</a>, I claimed that <a href="https://doi.org/10.1103/PhysRevE.67.026126">Newman’s (2003)</a> assortativity coefficient equals the Pearson correlation coefficient when there are two possible node types.
This post proves that claim.</p>
<h2 id="notation">Notation</h2>
<p>Consider an undirected network <code>\(N\)</code> in which each node has a type belonging to a (finite) set <code>\(T\)</code>.
The assortativity coefficient is defined as
<code>$$r=\frac{\sum_{t\in T}x_{tt}-\sum_{t\in T}y_t^2}{1-\sum_{t\in T}y_t^2},$$</code>
where <code>\(x_{st}\)</code> is the proportion of edges joining nodes of type <code>\(s\)</code> to nodes of type <code>\(t\)</code>, and where
<code>$$y_t=\sum_{s\in T}x_{st}$$</code>
is the proportion of edges incident with nodes of type <code>\(t\)</code>.
The Pearson correlation of adjacent nodes’ types is given by
<code>$$\DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\Var}{Var} \rho=\frac{\Cov(t_i,t_j)}{\sqrt{\Var(t_i)\Var(t_j)}},$$</code>
where <code>\(t_i\in T\)</code> and <code>\(t_j\in T\)</code> are the types of nodes <code>\(i\)</code> and <code>\(j\)</code>, and where (co)variances are computed with respect to the frequency at which nodes of type <code>\(t_i\)</code> and <code>\(t_j\)</code> are adjacent in <code>\(N\)</code>.</p>
<h2 id="proof">Proof</h2>
<p>Let <code>\(T=\{a,b\}\subset\mathbb{R}\)</code> with <code>\(a\not=b\)</code>.
I show that the correlation coefficient <code>\(\rho\)</code> and assortativity coefficient <code>\(r\)</code> can be expressed as the same function of <code>\(y_a\)</code> and <code>\(x_{ab}\)</code>, implying <code>\(\rho=r\)</code>.</p>
<p>Consider <code>\(\rho\)</code>.
It can be understood by presenting the <a href="#appendix-constructing-the-mixing-matrix">mixing matrix</a> <code>\(X=(x_{st})\)</code> in tabular form:</p>
<table>
<thead>
<tr>
<th><code>\(t_i\)</code></th>
<th><code>\(t_j\)</code></th>
<th><code>\(x_{t_it_j}\)</code></th>
</tr>
</thead>
<tbody>
<tr>
<td><code>\(a\)</code></td>
<td><code>\(a\)</code></td>
<td><code>\(x_{aa}\)</code></td>
</tr>
<tr>
<td><code>\(a\)</code></td>
<td><code>\(b\)</code></td>
<td><code>\(x_{ab}\)</code></td>
</tr>
<tr>
<td><code>\(b\)</code></td>
<td><code>\(a\)</code></td>
<td><code>\(x_{ba}\)</code></td>
</tr>
<tr>
<td><code>\(b\)</code></td>
<td><code>\(b\)</code></td>
<td><code>\(x_{bb}\)</code></td>
</tr>
</tbody>
</table>
<p>The first two columns enumerate the possible type pairs <code>\((t_i,t_j)\)</code> and the third column stores the proportion of adjacent node pairs <code>\((i,j)\)</code> with each type pair.
This third column defines the joint distribution of types across adjacent nodes.
Thus <code>\(\rho\)</code> equals the correlation of the first two columns, weighted by the third column.
(Here <code>\(x_{ab}=x_{ba}\)</code> since <code>\(N\)</code> is undirected.)
Now <code>\(t_i\)</code> has mean
<code>$$\DeclareMathOperator{\E}{E} \begin{aligned} \E[t_i] &= x_{aa}a+x_{ab}a+x_{ba}b+x_{bb}b \\ &= y_aa+y_bb \end{aligned}$$</code>
and second moment
<code>$$\begin{aligned} \E[t_i^2] &= x_{aa}a^2+x_{ab}a^2+x_{ba}b^2+x_{bb}b^2 \\ &= y_aa^2+y_bb^2, \end{aligned}$$</code>
and similar calculations reveal <code>\(\E[t_j]=\E[t_i]\)</code> and <code>\(\E[t_j^2]=\E[t_i^2]\)</code>.
Thus <code>\(t_i\)</code> has variance
<code>$$\begin{aligned} \Var(t_i) &= \E[t_i^2]-\E[t_i]^2 \\ &= y_aa^2+y_bb^2-(y_aa+y_bb)^2 \\ &= y_a(1-y_a)a^2+y_b(1-y_b)b^2-2y_ay_bab \end{aligned}$$</code>
and similarly <code>\(\Var(t_j)=\Var(t_i)\)</code>.
We can simplify this expression for the variance by noticing that
<code>$$x_{aa}+x_{ab}+x_{ba}+x_{bb}=1,$$</code>
which implies
<code>$$\begin{aligned} y_b &= x_{ab}+x_{bb} \\ &= 1-x_{aa}-x_{ba} \\ &= 1-y_a \end{aligned}$$</code>
and therefore
<code>$$\begin{aligned} \Var(t_i) &= y_a(1-y_a)a^2+(1-y_a)y_ab^2-2y_a(1-y_a)ab \\ &= y_a(1-y_a)(a-b)^2. \end{aligned}$$</code>
We next express the covariance <code>\(\Cov(t_i,t_j)=\E[t_it_j]-\E[t_i]\E[t_j]\)</code> in terms of <code>\(y_a\)</code> and <code>\(x_{ab}\)</code>.
Now
<code>$$\begin{aligned} \E[t_it_j] &= x_{aa}a^2+x_{ab}ab+x_{ba}ab+x_{bb}b^2 \\ &= (y_a-x_{ab})a^2+2x_{ab}ab+(y_b-x_{ab})b^2 \\ &= y_aa^2+y_bb^2-x_{ab}(a-b)^2 \end{aligned}$$</code>
because <code>\(x_{ab}=x_{ba}\)</code>.
It follows that
<code>$$\begin{aligned} \Cov(t_i,t_j) &= y_aa^2+y_bb^2-x_{ab}(a-b)^2-(y_aa+y_bb)^2 \\ &= y_a(1-y_a)a^2+y_b(1-y_b)b^2-2y_ay_bab-x_{ab}(a-b)^2 \\ &= y_a(1-y_a)(a-b)^2-x_{ab}(a-b)^2, \end{aligned}$$</code>
where the last line uses the fact that <code>\(y_b=1-y_a\)</code>.
Putting everything together, we have
<code>$$\begin{aligned} \rho &= \frac{\Cov(t_i,t_j)}{\sqrt{\Var(t_i)\Var(t_j)}} \\ &= \frac{y_a(1-y_a)-x_{ab}}{y_a(1-y_a)}, \end{aligned}$$</code>
a function of <code>\(y_a\)</code> and <code>\(x_{ab}\)</code>.</p>
<p>Now consider <code>\(r\)</code>.
Its numerator equals
<code>$$\begin{aligned} \sum_{t\in T}x_{tt}-\sum_{t\in T}y_t^2 &= x_{aa}+x_{bb}-y_a^2-y_b^2 \\ &= (y_a-x_{ab})+(y_b-x_{ab})-y_a^2-y_b^2 \\ &= y_a(1-y_a)+y_b(1-y_b)-2x_{ab} \\ &\overset{\star}{=} 2y_a(1-y_a)-2x_{ab} \end{aligned}$$</code>
and its denominator equals
<code>$$\begin{aligned} 1-\sum_{t\in T}y_t^2 &= 1-y_a^2-y_b^2 \\ &\overset{\star\star}{=} 1-y_a^2-(1-y_a)^2 \\ &= 2y_a(1-y_a), \end{aligned}$$</code>
where <code>\(\star\)</code> and <code>\(\star\star\)</code> both use the fact that <code>\(y_b=1-y_a\)</code>.
Thus
<code>$$r=\frac{y_a(1-y_a)-x_{ab}}{y_a(1-y_a)},$$</code>
the same function of <code>\(y_a\)</code> and <code>\(x_{ab}\)</code>, and so <code>\(\rho=r\)</code> as claimed.</p>
<p>Writing <code>\(\rho=r\)</code> in terms of <code>\(y_a\)</code> and <code>\(x_{ab}\)</code> makes it easy to check the boundary cases:
if there are no within-type edges then <code>\(y_a=x_{ab}=1/2\)</code> and so <code>\(\rho=r=-1\)</code>;
if there are no between-type edges then <code>\(x_{ab}=0\)</code> and so <code>\(\rho=r=1\)</code>.</p>
<h2 id="appendix-constructing-the-mixing-matrix">Appendix: Constructing the mixing matrix</h2>
<p>The proof relies on noticing that <code>\(x_{ab}=x_{ba}\)</code>, which comes from undirectedness of the network <code>\(N\)</code> and from how the mixing matrix <code>\(X\)</code> is constructed.
I often forget this construction, so here’s a simple algorithm:
Consider some type pair <code>\((s,t)\)</code>.
Look at the edges beginning at type <code>\(s\)</code> nodes and count how many end at type <code>\(t\)</code> nodes.
Call this count <code>\(m_{st}\)</code>.
Do the same for all type pairs to obtain a matrix <code>\(M=(m_{st})\)</code> of edge counts.
Divide the entries in <code>\(M\)</code> by their sum to obtain <code>\(X\)</code>.</p>
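This construction, and the claim that <code>\(\rho=r\)</code> with two types, can be checked numerically (a small self-contained sketch using a made-up five-node network):

```python
from itertools import product

def coefficients(edges, types):
    """Newman's assortativity r and the Pearson correlation rho of adjacent
    nodes' (numeric) types, for an undirected network given as an edge list."""
    T = sorted(set(types.values()))
    # Build the mixing matrix: count each undirected edge in both directions,
    # then divide by the total, as in the algorithm above.
    m = {st: 0 for st in product(T, repeat=2)}
    for i, j in edges:
        m[(types[i], types[j])] += 1
        m[(types[j], types[i])] += 1
    total = sum(m.values())
    x = {st: c / total for st, c in m.items()}
    y = {t: sum(x[(s, t)] for s in T) for t in T}
    sum_y2 = sum(y[t] ** 2 for t in T)
    r = (sum(x[(t, t)] for t in T) - sum_y2) / (1 - sum_y2)
    # Pearson correlation with respect to the joint distribution x.
    mu = sum(p * s for (s, t), p in x.items())
    var = sum(p * s * s for (s, t), p in x.items()) - mu ** 2
    cov = sum(p * s * t for (s, t), p in x.items()) - mu ** 2
    return r, cov / var

# A small two-type network with numeric types a = 0 and b = 1.
types = {1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
r, rho = coefficients(edges, types)
print(r, rho)  # equal (11/21), as the proof shows
```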
nberwp 1.1.0
https://bldavies.com/blog/nberwp-1-1-0/
Fri, 21 Jan 2022 00:00:00 +0000https://bldavies.com/blog/nberwp-1-1-0/<p>A new version of <a href="https://github.com/bldavies/nberwp">nberwp</a>, an R package containing data on <a href="https://www.nber.org/papers">NBER working papers</a>, is <a href="https://cran.r-project.org/package=nberwp">available on CRAN</a>.
This version adds information about (i) papers published in July–December 2021 and (ii) author sexes.</p>
<h2 id="papers-from-late-2021">Papers from late 2021</h2>
<p>The second half of 2021 saw 649 new NBER working papers by 1,663 unique authors, 503 of whom had not published in the series previously.
Those counts were down (from 858, 2,094, and 683, respectively) from the second half of 2020, but roughly in-line with pre-pandemic trends:</p>
<p><img src="figures/monthly-papers-1.svg" alt=""></p>
<p>nberwp 1.1.0 also corrects some <a href="https://bldavies.com/blog/nber-co-authorships/">false merges and splits</a> among authors who published <em>before</em> July 2021.
These corrections lowered the number of such authors from 15,437 in version 1.0.0 to 15,430 in version 1.1.0.</p>
<h2 id="author-sexes">Author sexes</h2>
<p>nberwp 1.1.0 adds information about author sexes, allowing one to, e.g., visualize the growing <a href="https://bldavies.com/blog/female-representation-collaboration-nber/">female representation</a> among NBER working paper authors:</p>
<p><img src="figures/female-representation-1.svg" alt=""></p>
<p>I obtain sex information by matching authors’ names with baby name and Facebook data, and through manual identification.
I document my matching and manual procedures in “<a href="https://doi.org/10.31235/osf.io/zeb7a">Sex-based sorting among economists: Evidence from the NBER</a>,” a new paper comparing males’ and females’ co-authorship patterns.</p>
Hypothesis tests and Bayesian reasoning
https://bldavies.com/blog/hypothesis-tests-bayesian-reasoning/
Thu, 06 Jan 2022 00:00:00 +0000https://bldavies.com/blog/hypothesis-tests-bayesian-reasoning/<p>Most empirical research relies on <a href="https://en.wikipedia.org/wiki/Statistical_hypothesis_testing">hypothesis testing</a>.
We form null and alternative hypotheses (e.g., a regression coefficient equals zero or doesn’t), collect some data, and reject the null if it implies those data are rare enough.
How rare is “enough” depends on the context, but a common rule is to reject the null if the <a href="https://en.wikipedia.org/wiki/P-value">p-value</a>—that is, the probability of observing the same or rarer data <em>given the null is true</em>—is smaller than 0.05.
However, this rule can lead to very different conclusions than <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem">Bayesian</a> reasoning.</p>
<p>For example, suppose I’m the government trying to collect taxes.
I know 1% of taxpayers cheat (e.g., by under-reporting their income), so I hire an auditor to detect cheating.
The auditor makes occasional mistakes: they incorrectly detect cheating among 2% of non-cheaters.
But the auditor never fails to detect cheating when it happens.</p>
<p>Suppose the auditor tells me Joe cheated on his taxes.
Should I prosecute him for fraud?
Letting “Joe is innocent” be the null hypothesis and “Joe is guilty” be the alternative, the p-value of the auditor’s message is simply their <a href="https://en.wikipedia.org/wiki/False_positives_and_false_negatives">false positive</a> rate: 0.02.
This p-value is smaller than the 0.05 “critical value” below which I reject nulls, so I take the auditor’s message as strong evidence of guilt.</p>
<p>Now consider a random sample of a thousand taxpayers.
The auditor accuses all ten cheaters in this sample of cheating.
But the auditor also accuses 20 of the 990 <em>non-cheaters</em> of cheating.
So only one in three accusees actually cheated—if I thought everyone like Joe was guilty, I would be wrong two thirds of the time!
That’s hardly evidence of guilt “beyond reasonable doubt.”</p>
<p>What’s going on?
Why does the hypothesis test suggest Joe is guilty, when simply counting true and false accusations suggests he’s innocent?</p>
<p>The suggestions differ because they are based on different probabilities.
The hypothesis test uses the probability that the auditor detects Joe cheating <em>given he is innocent</em>: 0.02.
But the counting argument uses the probability that Joe is innocent <em>given the auditor detects him cheating</em>: 0.66.
(Notice the swap in what comes before and after “given.”)</p>
<p>But which probability should I use?
Should I follow my hypothesis test and prosecute Joe, or should I follow my counting argument and let him walk free?</p>
<p>One problem with the hypothesis test is that it ignores the <a href="https://en.wikipedia.org/wiki/Base_rate">base rate</a>: most taxpayers are innocent.
Sure, false accusations are rare, but there are lots of non-cheaters to falsely accuse!
These false accusations crowd out the true accusations, which are relatively rare because cheating is rare.</p>
<p>In contrast, counting accusees effectively takes the base rate as a <a href="https://en.wikipedia.org/wiki/Prior_probability">prior belief</a> in Joe’s innocence and updates this belief in response to the evidence provided by the auditor.
My belief updates a lot—from 0.99 to 0.66—but not enough to indict Joe beyond reasonable doubt.
The auditor’s <a href="https://bldavies.com/blog/learning-noisy-signals/">signal is too noisy</a> to establish guilt on its own.
(One way to combat this noise is to hire a <em>second</em> auditor, identical to but independent of the first.
If <em>both</em> auditors told me Joe cheated then my belief in his innocence would fall to 0.04, which would be much stronger grounds for prosecution.)</p>
<p>However, things can change if my prior belief is incorrect.
For example, suppose I think 10% of taxpayers cheat, ten times as many as <em>actually</em> cheat.
When the auditor tells me Joe cheated, <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem">Bayes’ formula</a> tells me to update my belief in Joe’s innocence from 0.9 to 0.15, which is plausible grounds for prosecution.
Now accusee-counting <em>agrees</em> with my hypothesis test, even though my evidence didn’t change.
This sensitivity to prior beliefs—which may be incorrect, or may not even exist—is a common criticism of <a href="https://en.wikipedia.org/wiki/Bayesian_inference">Bayesian inference</a>.</p>
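All three posteriors above follow from one application of Bayes’ formula; here’s a sketch that reproduces the numbers (function name mine):

```python
def innocence_posterior(prior_innocent, false_positive_rate, accusations=1):
    """P(innocent | accused by each of `accusations` independent auditors),
    assuming auditors never fail to detect actual cheating."""
    like_innocent = false_positive_rate ** accusations  # innocent yet accused every time
    like_guilty = 1.0                                   # cheaters are always caught
    num = like_innocent * prior_innocent
    return num / (num + like_guilty * (1 - prior_innocent))

print(round(innocence_posterior(0.99, 0.02), 2))                 # 0.66: one auditor
print(round(innocence_posterior(0.99, 0.02, accusations=2), 2))  # 0.04: two auditors
print(round(innocence_posterior(0.90, 0.02), 2))                 # 0.15: wrong (10%) prior
```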
<p>But I like the Bayesian approach.
It forces me to remember that data are noisy: the auditor makes mistakes, as do the tools I use to observe and catch data in the wild.
This noisiness affects how I should interpret data as evidence of how the world works.
Bayesian reasoning also forces me to specify my priors—<a href="https://bldavies.com/blog/lessons-dave-mare/#have-weak-priors-and-strong-nulls">they’re probably wrong</a>, but specifying them encourages me to think about <em>why</em> they’re wrong (and, hopefully, work to make them <em>less</em> wrong).</p>
<p>I won’t go decrying hypothesis tests any time soon: they’re well-established as the dominant tool in empirical economics, not least because they’re easier to describe and interpret than Bayesian arguments.
But I’ll try to “be more Bayesian” generally: to think more carefully about my beliefs, about evidence, and how my beliefs respond to evidence.</p>
<hr>
<p><em>Thanks to Anirudh Sankar for reading a draft version of this post.
It was inspired by the tenth chapter of Jordan Ellenberg’s <a href="http://www.jordanellenberg.com/book/how-not-to-be-wrong/"><em>How Not to Be Wrong</em></a>.</em></p>
Stable matchings with correlated preferences
https://bldavies.com/blog/stable-matchings-correlated-preferences/
Fri, 19 Nov 2021 00:00:00 +0000https://bldavies.com/blog/stable-matchings-correlated-preferences/<p>Suppose I use the <a href="https://en.wikipedia.org/wiki/Gale%E2%80%93Shapley_algorithm">Gale-Shapley (GS) algorithm</a> to find a <a href="https://bldavies.com/blog/stable-matchings/">stable matching</a> between two sets <code>\(P\)</code> and <code>\(R\)</code> of size <code>\(n\)</code>.
Proposer <code>\(p\in P\)</code> gets utility
<code>$$u_{rp}=\alpha w_r+(1-\alpha)x_{rp}$$</code>
from being matched with reviewer <code>\(r\in R\)</code>, where <code>\(w_r\)</code> is common to all proposers, <code>\(x_{rp}\)</code> is specific to proposer <code>\(p\)</code>, and <code>\(\alpha\in[0,1]\)</code> controls the correlation in utilities across proposers.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
Likewise, reviewer <code>\(r\)</code> gets utility
<code>$$v_{pr}=\beta y_p+(1-\beta)z_{pr}$$</code>
from being matched with proposer <code>\(p\)</code>, where <code>\(y_p\)</code> is common to all reviewers, <code>\(z_{pr}\)</code> is specific to reviewer <code>\(r\)</code>, and <code>\(\beta\in[0,1]\)</code> controls the correlation in utilities across reviewers.
The <code>\(w_r\)</code>, <code>\(x_{rp}\)</code>, <code>\(y_p\)</code>, and <code>\(z_{pr}\)</code> are iid standard normal.
I run the GS algorithm 200 times, each time (i) simulating new utility realizations and (ii) computing the means
<code>$$U\equiv\frac{1}{n}\sum_{p\in P}u_{rp}$$</code>
and
<code>$$V\equiv\frac{1}{n}\sum_{r\in R}v_{pr}$$</code>
of utilities under the resulting matching.
I then compute the <a href="https://en.wikipedia.org/wiki/Grand_mean">grand means</a> of <code>\(U\)</code> and <code>\(V\)</code> across all 200 simulations.
The chart below shows how these grand means vary with <code>\(\alpha\)</code> and <code>\(\beta\)</code> when <code>\(n=50\)</code>.</p>
<p><img src="figures/plot-1.svg" alt=""></p>
<p>Proposers and reviewers tend to be better off when (i) utilities on their side of the market are <em>less</em> correlated and (ii) utilities on the <em>other</em> side of the market are <em>more</em> correlated.
Intuitively, same-side correlations induce competition that makes the most desirable people on that side better off but the rest much worse off.
This competition benefits the other side of the market because it gives people on that side more power to choose “winners” according to their preferences.</p>
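A minimal version of the simulation can be sketched as follows (my own re-implementation of proposer-optimal deferred acceptance; details like the number of runs differ from the original exercise):

```python
import random

def gale_shapley(prop_prefs, rev_utils):
    """Proposer-optimal deferred acceptance. prop_prefs[p] lists reviewers in
    descending preference order; rev_utils[r][p] is r's utility from p."""
    n = len(prop_prefs)
    next_choice = [0] * n  # index of each proposer's next proposal
    match = {}             # reviewer -> currently held proposer
    free = list(range(n))
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rev_utils[r][p] > rev_utils[r][match[r]]:
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)
    return {p: r for r, p in match.items()}

def simulate(alpha, beta, n=50, seed=0):
    """Mean proposer (U) and reviewer (V) utilities under one simulated matching."""
    rng = random.Random(seed)
    w = [rng.gauss(0, 1) for _ in range(n)]
    y = [rng.gauss(0, 1) for _ in range(n)]
    u = [[alpha * w[r] + (1 - alpha) * rng.gauss(0, 1) for r in range(n)] for p in range(n)]
    v = [[beta * y[p] + (1 - beta) * rng.gauss(0, 1) for p in range(n)] for r in range(n)]
    prop_prefs = [sorted(range(n), key=lambda r: -u[p][r]) for p in range(n)]
    match = gale_shapley(prop_prefs, v)
    U = sum(u[p][match[p]] for p in match) / n
    V = sum(v[match[p]][p] for p in match) / n
    return U, V

def grand_means(alpha, beta, sims=50):
    Us, Vs = zip(*(simulate(alpha, beta, seed=s) for s in range(sims)))
    return sum(Us) / sims, sum(Vs) / sims

print(grand_means(0.1, 0.9))  # proposers do well: low same-side correlation
print(grand_means(0.9, 0.1))  # reviewers do well: the pattern flips
```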
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>If <code>\(\mathrm{Var}(w_r)=\sigma_w^2\)</code> and <code>\(\mathrm{Var}(x_{rp})=\sigma_x^2\)</code> then <code>\(\mathrm{Corr}(u_{rp},u_{rq})=[1+(1-\alpha)^2\sigma_x^2/\alpha^2\sigma_w^2]^{-1}\)</code> increases with <code>\(\alpha\)</code>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Learning from noisy signals
https://bldavies.com/blog/learning-noisy-signals/
Sat, 23 Oct 2021 00:00:00 +0000https://bldavies.com/blog/learning-noisy-signals/<p>Suppose I want to learn the value of <code>\(\omega\in\{0,1\}\)</code>.
I observe a sequence of iid signals <code>\((s_n)_{n\ge1}\)</code> with
<code>$$\Pr(s_n=0\,\vert\,\omega=0)=1-\alpha$$</code>
and
<code>$$\Pr(s_n=1\,\vert\,\omega=1)=1-\beta,$$</code>
where <code>\(\alpha\)</code> and <code>\(\beta\)</code> are <a href="https://en.wikipedia.org/wiki/Type_I_and_type_II_errors">false positive and false negative rates</a>.
I let <code>\(\pi_n\)</code> denote my belief that <code>\(\omega=1\)</code> after observing <code>\(n\)</code> signals, and update this belief sequentially via <a href="https://en.wikipedia.org/wiki/Bayes%27_theorem">Bayes’ formula</a>:
<code>$$\pi_{n}(s)=\frac{\Pr(s_n=s\,\vert\,\omega=1)\pi_{n-1}}{\Pr(s_n=s)}.$$</code>
In particular, if I observe <code>\(s_n=0\)</code> then I update my belief to
<code>$$\pi_n(0)=\frac{\beta\pi_{n-1}}{\beta\pi_{n-1}+(1-\alpha)(1-\pi_{n-1})},$$</code>
whereas if I observe <code>\(s_n=1\)</code> then I update my belief to
<code>$$\pi_n(1)=\frac{(1-\beta)\pi_{n-1}}{(1-\beta)\pi_{n-1}+\alpha(1-\pi_{n-1})}.$$</code></p>
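<p>These update rules are straightforward to implement; here is a minimal Python sketch (the post’s figures were presumably generated in R; the function name is mine):</p>

```python
def update(belief, signal, alpha, beta):
    """One application of Bayes' formula to the belief that omega = 1,
    given a binary signal with false positive rate alpha and false
    negative rate beta."""
    if signal == 1:
        like_1, like_0 = 1 - beta, alpha  # Pr(s=1 | omega=1), Pr(s=1 | omega=0)
    else:
        like_1, like_0 = beta, 1 - alpha  # Pr(s=0 | omega=1), Pr(s=0 | omega=0)
    return like_1 * belief / (like_1 * belief + like_0 * (1 - belief))
```

<p>Note that when <code>alpha + beta == 1</code> the two likelihoods coincide, so <code>update</code> returns its input unchanged—the uninformative case discussed below.</p>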
<p>The chart below shows how my belief <code>\(\pi_n\)</code> changes with <code>\(n\)</code>.
Each path in the chart corresponds to the sequence of beliefs <code>\((\pi_0,\pi_1,\ldots,\pi_{100})\)</code> obtained by updating my initial belief <code>\(\pi_0=0.5\)</code> in response to a signal sequence <code>\((s_1,s_2,\ldots,s_{100})\)</code>.
I simulate 10 such sequences, fixing <code>\(\omega=1\)</code> and <code>\(\alpha=0.4\)</code> but varying <code>\(\beta\in\{0.2,0.4,0.6,0.8\}\)</code>.</p>
<p><img src="figures/paths-1.svg" alt=""></p>
<p>If <code>\(\beta\not=0.6\)</code> then my belief converges to <code>\(\pi_n=1\)</code> as <code>\(n\)</code> grows.
However, if <code>\(\beta=0.6\)</code> then <code>\(\pi_n=\pi_0\)</code> for each <code>\(n\)</code>; that is, I never update my beliefs regardless of the signals I observe.
This is because if <code>\(\alpha+\beta=1\)</code> then <code>\(\Pr(s_n=s\,\vert\,\omega=1)=\Pr(s_n=s\,\vert\,\omega=0)=\Pr(s_n=s)\)</code> for each <code>\(s\in\{0,1\}\)</code>, and so signals are uninformative because they are independent of <code>\(\omega\)</code>.</p>
<p>The chart below plots the mean of my beliefs <code>\(\pi_n\)</code> across 1,000 realizations of the signals simulated above.
Again, I fix <code>\(\omega=1\)</code> and the false positive rate <code>\(\alpha=0.4\)</code> but vary the false negative rate <code>\(\beta\in\{0.2,0.4,0.6,0.8\}\)</code>.
Higher values of <code>\(\beta\)</code> are not always worse: my belief converges to the truth faster when <code>\(\beta=0.8\)</code> than when <code>\(\beta=0.4\)</code>.
Intuitively, if I know the false negative rate is close to 100% then observing a signal <code>\(s_n=0\)</code> gives me strong evidence that <code>\(\omega=1\)</code>.</p>
<p><img src="figures/means-1.svg" alt=""></p>
A competition model of corruption
https://bldavies.com/blog/competition-model-corruption/
Tue, 12 Oct 2021 00:00:00 +0000https://bldavies.com/blog/competition-model-corruption/<p>This post presents a simple model of corruption in two-party elections.
The model is similar to one of <a href="https://en.wikipedia.org/wiki/Cournot_competition">Cournot competition</a>: parties choose quantities of corruption in response to implicit “price” schedules determined by voter preferences.
I describe the <a href="#model">model</a>, <a href="#equilibrium">derive</a> and <a href="#comparative-statics">analyze</a> its equilibrium, provide a <a href="#numerical-example">numerical example</a>, and discuss some <a href="#alternative-models">alternatives</a>.</p>
<h2 id="model">Model</h2>
<p>Two parties <code>\(A\)</code> and <code>\(B\)</code> compete for votes in an electoral system with <a href="https://en.wikipedia.org/wiki/Proportional_representation">proportional representation</a>.
Each party <code>\(k\)</code> chooses its corruption level <code>\(c_k\)</code> to maximize <code>\(c_ks_k(c_A,c_B)\)</code>, where party <code>\(k\)</code>'s vote share <code>\(s_k(c_A,c_B)\)</code> depends on both parties’ chosen corruptions.
This objective captures how parties benefit from engaging in corrupt activities, but only insofar as voters give them power to do so.</p>
<p>Voters don’t like corruption: voter <code>\(i\)</code>'s payoff from voting for <code>\(A\)</code> is <code>\((1-c_A+\epsilon_i)\)</code> and their payoff from voting for <code>\(B\)</code> is <code>\((1-c_B)\)</code>.
The <code>\(\epsilon_i\)</code> are iid uniformly distributed on <code>\([b-w,b+w]\)</code>, where <code>\(b\)</code> is the mean bias in favor of party <code>\(A\)</code> and <code>\(w>0\)</code> controls the noise in voter preferences.
Thus, party <code>\(A\)</code>'s vote share is
<code>$$\begin{aligned} s_A(c_A,c_B) &= \Pr(1-c_A+\epsilon_i\ge1-c_B) \\ &= \Pr(\epsilon_i\ge c_A-c_B) \\ &= \frac{b+w-(c_A-c_B)}{(b+w)-(b-w)} \\ &= \frac{1}{2}+\frac{b-(c_A-c_B)}{2w} \end{aligned}$$</code>
while party <code>\(B\)</code>'s vote share is
<code>$$\begin{aligned} s_B &= 1-s_A(c_A,c_B) \\ &= \frac{1}{2}-\frac{b-(c_A-c_B)}{2w}. \end{aligned}$$</code>
Parties <code>\(A\)</code> and <code>\(B\)</code> engage in a form of Cournot competition: they choose corruptions <code>\(c_k\)</code> independently and simultaneously, with full knowledge of the (inverse) demand curves <code>\(s_k(c_A,c_B)\)</code>.
These curves are downward-sloping: the “price” <code>\(s_k\)</code>, reflecting voters’ willingness to spend their votes on party <code>\(k\)</code>, falls with the chosen “quantity” <code>\(c_k\)</code>.
Corruptions <code>\(c_A\)</code> and <code>\(c_B\)</code> are substitutes in the sense that, e.g., the price <code>\(s_A\)</code> rises with <code>\(c_B\)</code>.</p>
<h2 id="equilibrium">Equilibrium</h2>
<p>The competition over corruption levels resolves at a <a href="https://en.wikipedia.org/wiki/Nash_equilibrium">Nash equilibrium</a> in which each party chooses optimally given the other party’s choice.
For party <code>\(A\)</code>, this means choosing <code>\(c_A^*\)</code> to satisfy the first-order condition
<code>$$\newcommand{\parfrac}[2]{\frac{\partial\,#1}{\partial\,#2}} \begin{aligned} 0 &= \parfrac{}{c_A^*}\left(c_A^*\,s_A(c_A^*,c_B)\right) \\ &= \frac{1}{2}+\frac{b-2c_A^*+c_B}{2w}, \end{aligned}$$</code>
which can be rewritten as
<code>$$2c_A^*-c_B=w+b.$$</code>
Similarly, the first-order condition for party <code>\(B\)</code>'s optimal choice <code>\(c_B^*\)</code> can be written as
<code>$$-c_A+2c_B^*=w-b.$$</code>
Therefore, the Nash equilibrium <code>\((c_A^*,c_B^*)\)</code> levels of corruption satisfy the linear system
<code>$$\begin{bmatrix}2&-1\\-1&2\end{bmatrix}\begin{bmatrix}c_A^*\\c_B^*\end{bmatrix}=\begin{bmatrix}w+b\\w-b\end{bmatrix},$$</code>
which has unique solution
<code>$$\begin{bmatrix}c_A^*\\c_B^*\end{bmatrix}=\frac{1}{3}\begin{bmatrix}3w+b\\3w-b\end{bmatrix}.$$</code>
Party <code>\(A\)</code>'s vote share in this equilibrium is
<code>$$s_A(c_A^*,c_B^*)=\frac{1}{2}\left(1+\frac{b}{3w}\right),$$</code>
which exceeds 50% if and only if <code>\(b\)</code> is <em>positive</em>; that is, when voters are biased <em>in favor of</em> party <code>\(A\)</code>.
In that case, voters are more tolerant of <code>\(A\)</code>'s corruption than <code>\(B\)</code>'s, and so party <code>\(A\)</code> “sells” more corruption than party <code>\(B\)</code>.
Moreover, the first-order conditions imply that each party’s equilibrium “price” is proportional to its “quantity:” <code>\(s_k(c_A^*,c_B^*)=c_k^*/(2w)\)</code>.
Thus, selling more corruption also earns party <code>\(A\)</code> a higher price than party <code>\(B\)</code>, and the parties’ equilibrium “corruption revenues”
<code>$$c_k^*s_k(c_A^*,c_B^*)=\frac{(c_k^*)^2}{2w}=\frac{(3w\pm b)^2}{18w}$$</code>
(with <code>\(+\)</code> for party <code>\(A\)</code> and <code>\(-\)</code> for party <code>\(B\)</code>) differ whenever <code>\(b\neq0\)</code>.</p>
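<p>One way to sanity-check this algebra is to iterate the two best responses implied by the first-order conditions until they converge. A minimal Python sketch, using b = 3 and w = 5 (the values in the numerical example below):</p>

```python
def vote_share_A(c_A, c_B, b, w):
    """Party A's vote share when voter biases are uniform on [b - w, b + w]."""
    return 0.5 + (b - (c_A - c_B)) / (2 * w)

def equilibrium(b, w):
    """Closed-form Nash equilibrium corruption levels (c_A*, c_B*)."""
    return (3 * w + b) / 3, (3 * w - b) / 3

def best_response_A(c_B, b, w):  # from the first-order condition 2c_A - c_B = w + b
    return (c_B + w + b) / 2

def best_response_B(c_A, b, w):  # from the first-order condition -c_A + 2c_B = w - b
    return (c_A + w - b) / 2

# Best-response dynamics converge for this linear system (the map is a contraction)
b, w = 3, 5
c_A, c_B = 0.0, 0.0
for _ in range(60):
    c_A, c_B = best_response_A(c_B, b, w), best_response_B(c_A, b, w)
```

<p>The iterates approach the closed-form solution <code>equilibrium(3, 5)</code>, i.e., the point (6, 4).</p>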
<h2 id="comparative-statics">Comparative statics</h2>
<p>Differentiating the Nash equilibrium corruption levels <code>\(c_A^*\)</code> and <code>\(c_B^*\)</code> with respect to the mean bias <code>\(b\)</code> gives
<code>$$\parfrac{c_A^*}{b}=\frac{1}{3}=-\parfrac{c_B^*}{b},$$</code>
implying that if <code>\(b\)</code> increases then party <code>\(A\)</code> becomes more corrupt by exactly the amount that party <code>\(B\)</code> becomes less corrupt.
Indeed, aggregate corruption <code>\(c_A^*+c_B^*=2w\)</code> is constant in <code>\(b\)</code> but increases with <code>\(w\)</code>.
Both parties become more corrupt (in equilibrium) when <code>\(w\)</code> rises:
<code>$$\parfrac{c_A^*}{w}=1=\parfrac{c_B^*}{w}.$$</code>
Intuitively, if <code>\(w\)</code> rises then voters become less sensitive to corruption because their preferences become noisier.
Both parties exploit this fall in sensitivity by becoming more corrupt, which makes them better off because
<code>$$\parfrac{}{w}\left(c_k^*\,s_k(c_A^*,c_B^*)\right)=\frac{1}{2}-\frac{b^2}{18w^2}$$</code>
is strictly positive whenever <code>\(\lvert b\rvert<3w\)</code>, as required for both parties to win positive vote shares.
On the other hand, if <code>\(b\)</code> rises then voters become more willing to tolerate party <code>\(A\)</code>'s corruption and less willing to tolerate party <code>\(B\)</code>'s.
Party <code>\(A\)</code> responds to this shift in relative tolerance by selling more corruption, and at a higher price <code>\(s_A(c_A^*,c_B^*)\)</code>.</p>
<h2 id="numerical-example">Numerical example</h2>
<p>The Nash equilibrium corruption levels lie at the intersection of party <code>\(A\)</code>'s best response curve
<code>$$c_A^*=\frac{c_B+w+b}{2}$$</code>
and party <code>\(B\)</code>'s best response curve
<code>$$c_B^*=\frac{c_A+w-b}{2},$$</code>
obtained by rearranging the first-order conditions for <code>\(c_A^*\)</code> and <code>\(c_B^*\)</code>.
The chart below plots these curves when <code>\(b=3\)</code> and <code>\(w=5\)</code>.
The curves intersect at <code>\((c_A^*,c_B^*)=(6,4)\)</code>, where party <code>\(A\)</code> wins a vote share of <code>\(s_A(c_A^*,c_B^*)=60\%\)</code>.</p>
<p><img src="figures/nash-1.svg" alt=""></p>
<p>Now suppose the mean bias in favor of party <code>\(A\)</code> rises to <code>\(b=9\)</code>.
The chart below shows how this rise shifts parties’ best response curves in the <code>\(c_Ac_B\)</code> plane.
These shifts move the Nash equilibrium rightward to <code>\((c_A^*,c_B^*)=(8,2)\)</code>.
Party <code>\(A\)</code>'s vote share rises to <code>\(s_A(c_A^*,c_B^*)=80\%\)</code>, and its corruption revenue <code>\(c_A^*s_A(c_A^*,c_B^*)\)</code> rises from <code>\(3.6\)</code> to <code>\(6.4\)</code> while party <code>\(B\)</code>'s falls from <code>\(1.6\)</code> to <code>\(0.4\)</code>.</p>
<p><img src="figures/nash-shift-1.svg" alt=""></p>
<h2 id="alternative-models">Alternative models</h2>
<p>With proportional representation, every vote for party <code>\(k\)</code> gives that party more power to engage in corrupt activities.
Consequently, the party trades off its corruption level <code>\(c_k\)</code> with its vote share <code>\(s_k(c_A,c_B)\)</code> continuously.
In contrast, if only the party with a majority vote share gains power then corruption revenues become discontinuous in vote shares.
This discontinuity changes the equilibrium choices of <code>\(c_A\)</code> and <code>\(c_B\)</code>.
For example, if electoral ties are resolved with a coin toss then the unique equilibrium gives each party a 50% vote share independently of <code>\(b\)</code> and <code>\(w\)</code>, and the corruption levels satisfy <code>\(c_A^*=c_B^*+b\)</code> (as opposed to <code>\(c_A^*=c_B^*+2b/3\)</code> under proportional representation).</p>
<p>One way to generalize the model with proportional representation is to introduce voting blocs: groups of voters with group-specific mean biases <code>\(b_j\)</code> and radii <code>\(w_j\)</code>.
Then the equilibrium corruption levels become
<code>$$c_A^*=\frac{3+\sum_jb_j\theta_j/w_j}{3\sum_j\theta_j/w_j}$$</code>
and
<code>$$c_B^*=\frac{3-\sum_jb_j\theta_j/w_j}{3\sum_j\theta_j/w_j},$$</code>
where <code>\(\theta_j\)</code> is group <code>\(j\)</code>'s share of the population.
Intuitively, the equilibrium depends on the aggregate bias and precision of voters’ preferences, but these aggregates depend on the group-specific biases <code>\(b_j\)</code> and precisions <code>\(1/w_j\)</code> as well as the relative group sizes <code>\(\theta_j\)</code>.
Introducing voting blocs makes the comparative statics more intricate but preserves the underlying intuitions.</p>
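<p>These formulas are easy to check numerically. A Python sketch (the function name is mine), which also confirms that a single bloc with population share one recovers the earlier solution ((3w+b)/3, (3w-b)/3):</p>

```python
def bloc_equilibrium(theta, b, w):
    """Equilibrium corruption levels with voting blocs.
    theta: population shares; b: bloc mean biases; w: bloc radii."""
    precision = sum(t / wj for t, wj in zip(theta, w))         # sum_j theta_j / w_j
    bias = sum(t * bj / wj for t, bj, wj in zip(theta, b, w))  # sum_j b_j theta_j / w_j
    c_A = (3 + bias) / (3 * precision)
    c_B = (3 - bias) / (3 * precision)
    return c_A, c_B
```
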
Snowball sampling bias in program evaluation
https://bldavies.com/blog/snowball-sampling-bias-program-evaluation/
Sat, 04 Sep 2021 00:00:00 +0000https://bldavies.com/blog/snowball-sampling-bias-program-evaluation/<p>Suppose I want to run a pilot study of a mental health support program before rolling it out at scale.
The program has heterogeneous treatment effects, but tends to be more effective for people who have fewer social connections.
Such people tend to have lower mental health (<a href="https://doi.org/10.1093/jurban/78.3.458">Kawachi and Berkman, 2001</a>) and so have more to gain from participating in the program.</p>
<p>I recruit people to my study via <a href="https://en.wikipedia.org/wiki/Snowball_sampling">snowball sampling</a>: I advertise it to a few initial seeds, who share the ads with their friends, who share the ads with their friends, and so on.
Everyone who sees an ad participates.
But some people are more likely to see ads than others: in particular, people with more friends have more chances to be sent an ad.
Consequently, I will tend to under-estimate the average treatment effect (ATE) of the program because people with more social connections, for whom the program is less effective, are more likely to appear in my pilot sample.
Such under-estimation may lead me to abandon the program even if its mental health benefits actually outweigh its implementation costs.</p>
<h2 id="demonstration">Demonstration</h2>
<p>As a concrete example, suppose each individual <code>\(i\)</code> has degree <code>\(d_i\)</code> in the social network from which I recruit my sample.
The treatment effect of the program on individual <code>\(i\)</code> is
<code>$$\beta_i=1-r\bar{d}_i+(1-r)z_i,$$</code>
where <code>\(\bar{d}_i\)</code> is a normalization of <code>\(d_i\)</code> with zero mean and unit variance across the network, the <code>\(z_i\)</code> are iid standard normal, and <code>\(r\)</code> is a parameter controlling the (negative) correlation between the <code>\(\beta_i\)</code> and <code>\(d_i\)</code>.
The treatment effects <code>\(\beta_i\)</code> give rise to individual-level outcomes
<code>$$y_i=\beta_it_i+\epsilon_i,$$</code>
where the <code>\(t_i\)</code> are binary treatment indicators and the <code>\(\epsilon_i\)</code> are iid standard normal errors.
The sample delivers an estimate
<code>$$\hat\beta=\frac{\sum_iy_it_i}{\sum_it_i}-\frac{\sum_iy_i(1-t_i)}{\sum_i(1-t_i)}$$</code>
of the program’s ATE: the difference in mean outcomes between treated and untreated members of the pilot sample.
Treatments are assigned to sample members randomly.
But the sample is recruited non-randomly: individual <code>\(i\)</code> is recruited with probability proportional to their degree <code>\(d_i\)</code>.
This non-random recruitment leads to sampling bias when the <code>\(\beta_i\)</code> and <code>\(d_i\)</code> are correlated.</p>
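<p>A minimal Python sketch of the estimator and the degree-proportional recruitment described above (the function names are mine, and recruitment here samples with replacement for simplicity):</p>

```python
import random

def ate_estimate(y, t):
    """Difference in mean outcomes between treated (t=1) and untreated (t=0)."""
    treated = [yi for yi, ti in zip(y, t) if ti == 1]
    control = [yi for yi, ti in zip(y, t) if ti == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

def recruit(degrees, size, seed=0):
    """Recruit a sample with probability proportional to network degree.
    Individuals with zero degree are never recruited, as in snowball sampling."""
    rng = random.Random(seed)
    people = range(len(degrees))
    return rng.choices(people, weights=degrees, k=size)
```
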
<p>The chart below summarizes the distribution of ATE estimates across 500 snowball samples of 250 people from a random social network.
This network contains 1,000 bilateral friendships among 1,000 people.
Network degrees vary between zero and seven, producing variation in the probability of being sampled.
I randomize the treatment effects <code>\(\beta_i\)</code> and assignments <code>\(t_i\)</code> in each simulation run.</p>
<p><img src="figures/example-1.svg" alt=""></p>
<p>The ATE estimate is unbiased when treatment effects are uncorrelated with network degrees.
However, the estimate becomes more biased as the correlation becomes stronger.
Intuitively, the more the program’s effectiveness is concentrated among low-degree individuals, the worse the program looks in pilot samples excluding those individuals (independently of how treatments are assigned).</p>
<h2 id="potential-solutions">Potential solutions</h2>
<p>How can we mitigate snowball sampling bias?
One approach is to collect information about sample members’ degrees in the social network, and use this information to obtain weighted ATE estimates.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
The difference-in-means estimator <code>\(\hat\beta\)</code> equals the OLS estimator of <code>\(\beta\)</code> in the linear model
<code>$$y_i=\alpha+\beta t_i+\varepsilon_i$$</code>
relating outcomes to treatment assignments, where the intercept <code>\(\alpha\)</code> captures the mean outcome among untreated sample members.
Using <a href="https://en.wikipedia.org/wiki/Weighted_least_squares">weighted least squares</a> (WLS) with weights <code>\(w_i=1/\sqrt{d_i}\)</code> may deliver less biased estimates by accounting for the probability of sampling each individual <code>\(i\)</code>.
Intuitively, individuals with lower degrees provide relatively more information about the true ATE because they are less likely to be sampled, and so giving these individuals higher weights in the estimation procedure leads to more informed estimates.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
However, the distribution of degrees <code>\(d_i\)</code> in the sample is different than the distribution of degrees in the full network, and so weighting by the (observed) <code>\(d_i\)</code> may still deliver different (and thus incorrect) estimates than weighting by the (unobserved) sampling probabilities.</p>
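<p>For a single binary regressor, the WLS slope has a simple closed form. A Python sketch (the function name is mine), fitting an intercept so that equal weights recover the difference in means:</p>

```python
def wls_slope(y, t, d):
    """WLS estimate of the treatment coefficient in a regression of y on t
    (with an intercept), minimizing sum_i (residual_i)^2 / d_i,
    i.e., weighting each residual by 1/sqrt(d_i)."""
    v = [1.0 / di for di in d]  # squared weights: (1/sqrt(d_i))^2 = 1/d_i
    sv = sum(v)
    st = sum(vi * ti for vi, ti in zip(v, t))
    sy = sum(vi * yi for vi, yi in zip(v, y))
    stt = sum(vi * ti * ti for vi, ti in zip(v, t))
    sty = sum(vi * ti * yi for vi, ti, yi in zip(v, t, y))
    return (sv * sty - st * sy) / (sv * stt - st ** 2)
```

<p>With equal sampling weights this reduces to the difference-in-means estimator <code>\(\hat\beta\)</code>.</p>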
<p>Another approach, suggested by <a href="https://dx.doi.org/10.2139/ssrn.3522256">Jackson et al. (2020)</a>, is to model sample recruitment explicitly using game theory.
The authors describe a game wherein each individual’s recruitment payoff depends on whether their peers are recruited.
The equilibrium of this game determines each individual’s recruitment probability conditional on the network structure (and other covariates).
Jackson et al. embed this game in an estimation procedure based on <a href="https://en.wikipedia.org/wiki/Propensity_score_matching">propensity score matching</a>, and show theoretically and empirically that this procedure leads to better ATE estimates.</p>
<hr>
<p><em>Thanks to <a href="https://twitter.com/RyanBrennanEcon">Ryan Brennan</a> for discussing the ideas presented in this post.</em></p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>This approach is conceptually similar to the “respondent-driven sampling” technique described by <a href="https://doi.org/10.1111/j.0081-1750.2004.00152.x">Salganik and Heckathorn (2004)</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Taking square roots recognizes that the objective of WLS is to minimize the weighted sum of <em>squared</em> residuals. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Improving human predictions
https://bldavies.com/blog/improving-human-predictions/
Tue, 17 Aug 2021 00:00:00 +0000https://bldavies.com/blog/improving-human-predictions/<p>Chapter 9 of <a href="https://readnoise.com">Kahneman et al. (2021)</a> discusses how predictions made by humans can be less accurate than predictions made using statistical models.
Part of the chapter describes research by <a href="https://doi.org/10.1037/h0029230">Goldberg (1970)</a> and subsequent authors showing that models of human predictions can out-perform the humans on which those models are based.</p>
<p>For example, suppose I’m asked to make predictions in a range of contexts <code>\(i\in\{1,2,\ldots,n\}\)</code>.
My goal is to use some contextual data <code>\(x_i\in\mathbb{R}^k\)</code> to predict the value of a context-specific outcome <code>\(y_i\)</code>.
I generate predictions
<code>$$\newcommand{\abs}[1]{\lvert#1\rvert} \bar{y}_i=y_i+u_i,$$</code>
where the <code>\(u_i\)</code> are context-specific errors.
The accuracy of my predictions can be measured via their <a href="https://en.wikipedia.org/wiki/Mean_squared_error">mean squared error</a> (MSE)
<code>$$\frac{1}{n}\sum_{i=1}^n(\bar{y}_i-y_i)^2=\frac{1}{n}\sum_{i=1}^nu_i^2,$$</code>
where a lower MSE implies higher accuracy.
Another way to generate predictions could be to posit a linear model
<code>$$y_i=\theta x_i+\epsilon_i,$$</code>
where <code>\(\theta\)</code> is a row vector of coefficients and the <code>\(\epsilon_i\)</code> are random errors.
But I don’t know the true outcomes <code>\(y_i\)</code>—hence needing to predict them—and so I can’t just use ordinary least squares (OLS) to estimate <code>\(\theta\)</code>.
Instead, Goldberg (1970) suggests replacing this linear model with
<code>$$\bar{y}_i=\beta x_i+\varepsilon_i,$$</code>
where <code>\(\beta\)</code> is a (possibly different) vector of coefficients and the <code>\(\varepsilon_i\)</code> are (possibly different) random errors.
This second model describes the linearized relationship between my (possibly incorrect) predictions <code>\(\bar{y}_i\)</code> and the data <code>\(x_i\)</code> on which those predictions are based.
Since I know my predictions <code>\(\bar{y}_i\)</code>, I can use OLS to obtain an estimate <code>\(\hat\beta\)</code> of <code>\(\beta\)</code> and produce a set of “modeled predictions”
<code>$$\hat{y}_i=\hat\beta x_i.$$</code>
The difference between the <code>\(\bar{y}_i\)</code> and <code>\(\hat{y}_i\)</code> is that the latter ignore the non-linearities in my method for generating predictions.
Intuitively, the <code>\(\hat{y}_i\)</code> represent what I would predict using a simple, linear formula; my predictions <code>\(\bar{y}_i\)</code> may be generated using a formula that is much more complex, or may not be generated using a formula at all.</p>
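<p>This bootstrapping procedure takes a few lines to reproduce. A Python sketch under the same assumptions as the example below (iid standard normal data and errors), using a no-intercept OLS fit since all variables have mean zero, and a larger sample than the chart’s for stable estimates:</p>

```python
import random

def ols_slope(x, y):
    """OLS slope in a no-intercept regression of y on x (all variables mean zero)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def mse(pred, true):
    """Mean squared error of a list of predictions."""
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)

rng = random.Random(42)
n = 10_000
x = [rng.gauss(0, 1) for _ in range(n)]      # contextual data
z = [rng.gauss(0, 1) for _ in range(n)]
u = [rng.gauss(0, 1) for _ in range(n)]      # my prediction errors
y = [(xi + zi) / 2 for xi, zi in zip(x, z)]  # true outcomes
y_bar = [yi + ui for yi, ui in zip(y, u)]    # my raw predictions
beta_hat = ols_slope(x, y_bar)               # fit a linear model to my predictions
y_hat = [beta_hat * xi for xi in x]          # modeled predictions
```

<p>Here the modeled predictions <code>y_hat</code> have markedly lower MSE than the raw predictions <code>y_bar</code>, as in the chart below.</p>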
<p>So, how do my raw predictions <code>\(\bar{y}_i\)</code> and their modeled counterparts <code>\(\hat{y}_i\)</code> compare?
The chart below plots the <code>\(\bar{y}_i\)</code> and <code>\(\hat{y}_i\)</code> against the true values <code>\(y_i\)</code> when</p>
<ol>
<li>the <code>\(x_i\)</code> and <code>\(u_i\)</code> are iid standard normal, and</li>
<li><code>\(y_i=(x_i+z_i)/2\)</code> with <code>\(z_i\)</code> iid standard normal.</li>
</ol>
<p>The modeled predictions are far more accurate: they have an MSE of 0.22, whereas my raw predictions have an MSE of 0.76.
In this case, the true relationship between the <code>\(y_i\)</code> and <code>\(x_i\)</code> is linear, and so a linear model of my predictions is well-placed to out-perform those predictions.</p>
<p><img src="figures/example-1.svg" alt=""></p>
<p>However, modeling predictions does not always improve their accuracy.
For example, suppose the contextual data <code>\(x_i\)</code> are scalars, and the <code>\(x_i\)</code>, <code>\(y_i\)</code>, and <code>\(u_i\)</code> have zero means.
Then the MSE of the modeled predictions turns out to be
<code>$$\frac{1}{n}\sum_{i=1}^n(\hat{y}_i-y_i)^2=\sigma_y^2+\rho_{ux}^2\sigma_u^2-\rho_{xy}^2\sigma_y^2,$$</code>
where <code>\(\sigma_y^2\)</code> and <code>\(\sigma_u^2\)</code> are the variances of the <code>\(y_i\)</code> and <code>\(u_i\)</code>, where <code>\(\rho_{ux}\)</code> is the correlation of the <code>\(u_i\)</code> and <code>\(x_i\)</code>, and where <code>\(\rho_{xy}\)</code> is the correlation of the <code>\(x_i\)</code> and <code>\(y_i\)</code>.
Consequently, replacing my raw predictions <code>\(\bar{y}_i\)</code> with their modeled counterparts <code>\(\hat{y}_i\)</code> leads to an accuracy improvement if and only if
<code>$$\sigma_y^2(1-\rho_{xy}^2)<\sigma_u^2(1-\rho_{ux}^2).$$</code>
This condition holds in the example plotted above: <code>\(\sigma_u^2=1\)</code> is twice as large as <code>\(\sigma_y^2=1/2\)</code>, and <code>\(\rho_{xy}=0.69\)</code> is much larger in absolute value than <code>\(\rho_{ux}=-0.09\)</code>.
In general, the condition is most likely to hold when</p>
<ol>
<li><code>\(\sigma_u^2\)</code> is larger than <code>\(\sigma_y^2\)</code> (i.e., my raw predictions are relatively noisy);</li>
<li><code>\(\abs{\rho_{xy}}\)</code> is large (i.e., the relationship between the <code>\(y_i\)</code> and <code>\(x_i\)</code> is approximately linear and deterministic); and</li>
<li><code>\(\abs{\rho_{ux}}\)</code> is small (i.e., the errors <code>\(u_i\)</code> in my raw predictions are relatively uncorrelated with the <code>\(x_i\)</code>).</li>
</ol>
<p>Intuitively, if the outcomes <code>\(y_i\)</code> are a linear function of the <code>\(x_i\)</code> (i.e., if <code>\(\abs{\rho_{xy}}=1\)</code>) then linearizing my predictions improves their accuracy by removing non-linear errors.
On the other hand, if <em>my prediction errors</em> <code>\(u_i\)</code> are a linear function of the <code>\(x_i\)</code> (i.e., if <code>\(\abs{\rho_{ux}}=1\)</code>) then linearizing my predictions cannot improve their accuracy because there are no non-linear errors to remove.</p>
Dead ends
https://bldavies.com/blog/dead-ends/
Wed, 28 Jul 2021 00:00:00 +0000https://bldavies.com/blog/dead-ends/<p>We can think of research as having two phases:</p>
<ol>
<li>a “creative phase,” during which researchers search for new ideas, and</li>
<li>a “working phase,” during which researchers exert effort on an idea.</li>
</ol>
<p><a href="https://doi.org/10.1016/j.jet.2020.105167">Sadler (2021)</a> views these phases as complementary productive inputs, and analyzes how and why the optimal input mix changes when features of the research environment change.
For example, if creativity becomes more expensive then researchers spend more time working on each idea and, in Sadler’s model, tend to work on lower quality ideas.
Policymakers can use subsidies and taxes to change the relative costs of creativity and work, thereby influencing how researchers allocate their time.</p>
<p>The core features of Sadler’s model are as follows.
A researcher wants to maximize the (sum of discounted) payoffs from their life’s work on research ideas.
Different ideas have different qualities, and the researcher knows an idea’s quality when they find it.
What they don’t know is whether each idea is feasible: some ideas are “dead ends” in the sense that they cannot generate payoffs, no matter their quality or how long the researcher works on them.
The longer the researcher works on an idea without it paying off, the more they start to believe the idea is a dead end.
The researcher can act on this belief at any time by abandoning an idea and searching for a new one.</p>
<p>In Sadler’s model, the researcher continues working on an idea if and only if the expected payoff exceeds the sum of two costs: (i) the effort required to work on the idea and (ii) the opportunity cost of not searching for another idea.
This opportunity cost depends on the researcher’s discount rate for future payoffs because new ideas take time to find.
The expected payoff from working on an idea falls over time, implying that there is a unique amount of time that the researcher spends on each idea before abandoning it.
This amount of time is larger for ideas with higher quality.</p>
<p>If effort becomes more costly then the researcher spends less time working on each idea and focuses their effort on higher quality ideas.
Intuitively, this rise in effort costs makes the creative phase relatively cheaper, so the researcher substitutes towards it.
On the other hand, if the researcher’s discount rate rises (i.e., they become less patient) then they spend <em>more</em> time in the working phase and allow themselves to work on <em>lower</em> quality ideas.
This is because the discounted payoff from continuing to work on an idea falls by less than the discounted opportunity cost of searching for a new idea.</p>
<p>Sadler’s model helps explain why organizations with a shorter-term focus (i.e., a higher discount rate) tend to be less innovative: they spend too much time working on low quality ideas, and not enough time searching for high quality ideas.
In contrast, organizations that use longer-term incentives, such as <a href="https://en.wikipedia.org/wiki/Employee_stock_option">stock options</a> and <a href="https://en.wikipedia.org/wiki/Golden_parachute">golden parachutes</a>, tend to be more innovative (<a href="https://doi.org/10.1162/rest.89.4.634">Lerner and Wulf, 2007</a>; <a href="https://dx.doi.org/10.2139/ssrn.1953947">Francis et al., 2011</a>).</p>
<p>Sadler’s model also demonstrates how subsidies and taxes can influence how researchers allocate their time.
Subsidizing effort during the working phase lowers the relative cost of that phase, and so researchers spend more time working but tend to work on lower quality ideas.
Taxing payoffs raises the quality threshold for abandoning an idea, and so researchers spend <em>less</em> time in the working phase but tend to work on <em>higher</em> quality ideas.
The optimal policy depends on the social value of research: the more convex is that value in idea quality, the more society wants researchers to focus on fewer but higher quality ideas, and so the more attractive are taxes relative to subsidies.</p>
nberwp is now on CRAN
https://bldavies.com/blog/nberwp-cran/
Wed, 21 Jul 2021 00:00:00 +0000https://bldavies.com/blog/nberwp-cran/<p>nberwp, an R package providing information on <a href="https://www.nber.org/papers">NBER working papers</a> and their authors, is now <a href="https://cran.r-project.org/package=nberwp">available on CRAN</a>.
The current version (1.0.0) covers 29,434 papers published between June 1973 and June 2021.
It can be installed via</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">install.packages</span><span class="p">(</span><span class="s">'nberwp'</span><span class="p">)</span>
</code></pre></div><p>nberwp has evolved since <a href="https://bldavies.com/blog/introducing-nberwp/">its initial release</a> on GitHub nearly two years ago.
This post describes some of the main changes.</p>
<h2 id="more-papers">More papers</h2>
<p>The first version of nberwp covered papers published between June 1973 and December 2018.
The updated version adds papers published between January 2019 and June 2021, allowing one to visualize the spike in publications when COVID-19 emerged:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">dplyr</span><span class="p">)</span>
<span class="nf">library</span><span class="p">(</span><span class="n">ggplot2</span><span class="p">)</span>
<span class="nf">library</span><span class="p">(</span><span class="n">nberwp</span><span class="p">)</span>
<span class="n">papers</span> <span class="o">%>%</span>
<span class="nf">count</span><span class="p">(</span><span class="n">Quarter</span> <span class="o">=</span> <span class="n">year</span> <span class="o">+</span> <span class="p">(</span><span class="nf">ceiling</span><span class="p">(</span><span class="n">month</span> <span class="o">/</span> <span class="m">3</span><span class="p">)</span> <span class="o">-</span> <span class="m">1</span><span class="p">)</span> <span class="o">/</span> <span class="m">4</span><span class="p">,</span> <span class="n">name</span> <span class="o">=</span> <span class="s">'New papers'</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">ggplot</span><span class="p">(</span><span class="nf">aes</span><span class="p">(</span><span class="n">Quarter</span><span class="p">,</span> <span class="n">`New papers`</span><span class="p">))</span> <span class="o">+</span>
<span class="nf">geom_line</span><span class="p">()</span> <span class="o">+</span>
<span class="nf">labs</span><span class="p">(</span><span class="n">title</span> <span class="o">=</span> <span class="s">'COVID-19 induced a spike in NBER publications'</span><span class="p">,</span>
<span class="n">subtitle</span> <span class="o">=</span> <span class="s">'New NBER working papers, by quarter'</span><span class="p">)</span>
</code></pre></div><p><img src="figures/covid-1.svg" alt=""></p>
<p>nberwp now also includes papers published in the historical and technical working paper series.
The historical series contains 136 papers focused on (American) economic history, and the technical series contains 337 papers focused on analytical and empirical methods.</p>
<p>The working paper data exclude duplicates (e.g., papers published in multiple series) but include revisions, which capture continued development of (and collaboration on) research ideas that I believe should be acknowledged.</p>
<h2 id="program-affiliations">Program affiliations</h2>
<p>The NBER organizes its research into <a href="https://www.nber.org/programs-projects/programs-working-groups">programs</a>, each of which “corresponds loosely to a traditional field of study within economics.”
nberwp now provides a table of paper-program correspondences:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">paper_programs</span>
</code></pre></div><pre><code>## # A tibble: 53,996 x 2
## paper program
## <chr> <chr>
## 1 w0074 EFG
## 2 w0087 IFM
## 3 w0087 ITI
## 4 w0107 PE
## 5 w0116 PE
## 6 w0117 LS
## 7 w0129 HE
## 8 w0131 IFM
## 9 w0131 ITI
## 10 w0134 HE
## # … with 53,986 more rows
</code></pre><p>as well as a table of program descriptions:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">programs</span>
</code></pre></div><pre><code>## # A tibble: 21 x 3
## program program_desc program_category
## <chr> <chr> <chr>
## 1 AG Economics of Aging Micro
## 2 AP Asset Pricing Finance
## 3 CF Corporate Finance Finance
## 4 CH Children Micro
## 5 DAE Development of the American Economy Micro
## 6 DEV Development Economics Micro
## 7 ED Economics of Education Micro
## 8 EEE Environment and Energy Economics Micro
## 9 EFG Economic Fluctuations and Growth Macro/International
## 10 HC Health Care Micro
## # … with 11 more rows
</code></pre><p>The <code>program_category</code> column categorizes programs similarly to <a href="https://www.nber.org/papers/w23953">Chari and Goldsmith-Pinkham (2017)</a>.
On average, each paper is affiliated with 1.83 programs and each program has 2,571 affiliated papers.</p>
<p>One use of the paper-program correspondences is to analyze the intellectual overlaps among programs.
For example, the table below presents the six pairs of programs with the most-overlapping sets of affiliated papers, with overlap sizes measured by <a href="https://en.wikipedia.org/wiki/Jaccard_index">Jaccard indices</a>.
The top index of 0.29 means that about 29% of the papers affiliated with the Children or Economics of Education programs are affiliated with both.</p>
<table>
<thead>
<tr>
<th align="left">Program 1</th>
<th align="left">Program 2</th>
<th align="right">Jaccard index</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Children</td>
<td align="left">Economics of Education</td>
<td align="right">0.29</td>
</tr>
<tr>
<td align="left">Health Care</td>
<td align="left">Health Economics</td>
<td align="right">0.29</td>
</tr>
<tr>
<td align="left">International Finance and Macroeconomics</td>
<td align="left">International Trade and Investment</td>
<td align="right">0.26</td>
</tr>
<tr>
<td align="left">Economic Fluctuations and Growth</td>
<td align="left">Monetary Economics</td>
<td align="right">0.23</td>
</tr>
<tr>
<td align="left">Asset Pricing</td>
<td align="left">Corporate Finance</td>
<td align="right">0.17</td>
</tr>
<tr>
<td align="left">Labor Studies</td>
<td align="left">Public Economics</td>
<td align="right">0.15</td>
</tr>
</tbody>
</table>
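<p>These indices are easy to compute from the paper-program table. Below is a minimal sketch in Python (rather than R, to keep it self-contained), using a few hypothetical paper-program pairs in place of the full <code>paper_programs</code> data:</p>

```python
from itertools import combinations

# Hypothetical paper-program pairs mimicking the structure of paper_programs
pairs = [("w0074", "EFG"), ("w0087", "IFM"), ("w0087", "ITI"),
         ("w0131", "IFM"), ("w0131", "ITI"), ("w0200", "IFM")]

# Collect the set of papers affiliated with each program
papers_by_program = {}
for paper, program in pairs:
    papers_by_program.setdefault(program, set()).add(paper)

def jaccard(p, q):
    """Jaccard index: |intersection| / |union| of two programs' paper sets."""
    a, b = papers_by_program[p], papers_by_program[q]
    return len(a & b) / len(a | b)

for p, q in combinations(sorted(papers_by_program), 2):
    print(p, q, round(jaccard(p, q), 2))  # e.g., IFM-ITI share 2 of 3 papers: 0.67
```

<p>The table above applies the same computation to the full data.</p>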
<h2 id="authorships">Authorships</h2>
<p>nberwp now contains information about working papers’ (co-)authors:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">authors</span>
</code></pre></div><pre><code>## # A tibble: 15,437 x 4
## author name user_nber user_repec
## <chr> <chr> <chr> <chr>
## 1 w0001.1 Finis Welch finis_welch <NA>
## 2 w0002.1 Barry R Chiswick barry_chiswick pch425
## 3 w0003.1 Swarnjit S Arora swarnjit_arora <NA>
## 4 w0004.1 Lee A Lillard <NA> pli669
## 5 w0005.1 James P Smith james_smith psm28
## 6 w0006.1 Victor Zarnowitz victor_zarnowitz <NA>
## 7 w0007.1 Lewis C Solmon <NA> <NA>
## 8 w0008.1 Merle Yahr Weiss <NA> <NA>
## 9 w0008.2 Robert E Lipsey robert_lipsey pli259
## 10 w0010.1 Paul W Holland <NA> <NA>
## # … with 15,427 more rows
</code></pre><p>The <code>author</code> column contains unique author identifiers, constructed by concatenating each author’s debut paper and their position on that paper’s (alphabetized) byline.
This construction ensures that <code>author</code> values do not change when I add newly published papers to the data.
The <code>user_nber</code> column contains authors’ usernames on the NBER website; the <code>user_repec</code> column contains authors’ <a href="https://ideas.repec.org">RePEc</a> IDs.
Some authors do not have an NBER username or RePEc ID, indicated by <code>NA</code> values in the appropriate column.</p>
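<p>To illustrate the identifier construction, here is a hypothetical sketch in Python (the package assigns IDs only on each author&rsquo;s <em>debut</em> paper; this toy function handles a single byline):</p>

```python
def author_ids(paper, byline):
    """Map names to IDs: debut paper number + position on the alphabetized byline."""
    return {name: f"{paper}.{i + 1}" for i, name in enumerate(sorted(byline))}

# The two authors of w0008, as in the authors table
print(author_ids("w0008", ["Robert E Lipsey", "Merle Yahr Weiss"]))
# {'Merle Yahr Weiss': 'w0008.1', 'Robert E Lipsey': 'w0008.2'}
```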
<p>nberwp also provides a table of paper-author correspondences:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">paper_authors</span>
</code></pre></div><pre><code>## # A tibble: 67,090 x 2
## paper author
## <chr> <chr>
## 1 w0001 w0001.1
## 2 w0002 w0002.1
## 3 w0003 w0003.1
## 4 w0004 w0004.1
## 5 w0005 w0005.1
## 6 w0006 w0006.1
## 7 w0007 w0007.1
## 8 w0008 w0008.1
## 9 w0008 w0008.2
## 10 w0009 w0004.1
## # … with 67,080 more rows
</code></pre><p>This table can be used to construct a co-authorship network among the 15,437 authors identified in nberwp.
This network currently contains 38,968 edges, implying that 0.03% of author pairs co-authored at least one working paper during the period covered by the data.
Authors in the network have a mean degree of 5.05.</p>
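<p>These network statistics follow from simple counting (a back-of-the-envelope check in Python, using the node and edge counts quoted above):</p>

```python
nodes = 15437  # authors identified in nberwp
edges = 38968  # co-authorship edges

possible_pairs = nodes * (nodes - 1) // 2  # unordered author pairs
share = 100 * edges / possible_pairs       # % of pairs who co-authored
mean_degree = 2 * edges / nodes            # each edge has two endpoints

print(round(share, 2))        # → 0.03
print(round(mean_degree, 2))  # → 5.05
```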
<p>I used previous versions of nberwp in blog posts on <a href="https://bldavies.com/blog/triadic-closure-nber/">triadic closure</a> and <a href="https://bldavies.com/blog/female-representation-collaboration-nber/">female representation</a>.
These posts assumed that authors were uniquely identified by their full names.
This assumption was problematic: different authors could share the same name, or a single author could publish under many names (e.g., before and after marriage).
The updated version of nberwp builds on <a href="https://bldavies.com/blog/nber-co-authorships/">previous efforts to disambiguate authors’ names</a>—namely cross-referencing against NBER usernames, RePEc IDs, common co-authorships, and name edit distances—in three ways:</p>
<ol>
<li>using paper-program correspondences to identify authors who have similar names and published papers in similar programs, and so are likely to be the same person;</li>
<li>manually merging (or splitting) authors whom I determine to be the same (or distinct) based on their personal or academic websites;</li>
<li>including an author ID variable (<code>author</code>) rather than relying on names for unique identification.</li>
</ol>
<p>These enhancements support cleaner analyses of (co-)authorship behavior.
Nonetheless, the data may still contain errors—if you find any, let me know by opening an issue on <a href="https://github.com/bldavies/nberwp">GitHub</a>.</p>
Coefficients of correlated regressors
https://bldavies.com/blog/coefficients-correlated-regressors/
Wed, 07 Jul 2021 00:00:00 +0000https://bldavies.com/blog/coefficients-correlated-regressors/<p>Linear models cannot be estimated <a href="https://en.wikipedia.org/wiki/Multicollinearity">when regressors are perfectly correlated</a>, and their coefficients have large variances when regressors are almost-perfectly correlated.
But how does coefficients’ correlation depend on regressors’ correlation?</p>
<p>To answer this question, suppose I have data <code>\((y_i,x_i,z_i)_{i=1}^n\)</code> generated by the process
<code>$$\newcommand{\abs}[1]{\lvert#1\rvert} \DeclareMathOperator{\Cor}{Cor} \DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\Var}{Var} \renewcommand{\epsilon}{\varepsilon} y_i=\beta_1x_i+\beta_2z_i+\epsilon_i,$$</code>
where the <code>\(x_i\)</code> and <code>\(z_i\)</code> are normalized to have zero mean and unit variance, and where the <code>\(\epsilon_i\)</code> are iid with zero mean and zero correlation with the <code>\(x_i\)</code> and <code>\(z_i\)</code>.
If the <code>\(x_i\)</code> and <code>\(z_i\)</code> are not perfectly correlated then the OLS estimator <code>\(\hat\beta\)</code> of the coefficient vector <code>\((\beta_1,\beta_2)\)</code> has variance
<code>$$\DeclareMathOperator{\Var}{Var} \Var(\hat\beta)=\frac{\sigma^2}{n(1-\rho^2)}\begin{bmatrix}1&-\rho\\-\rho&1\end{bmatrix},$$</code>
where <code>\(\sigma^2\)</code> is the variance of the <code>\(\epsilon_i\)</code>, and where <code>\(\rho\)</code> is the (empirical) correlation of the <code>\(x_i\)</code> and <code>\(z_i\)</code>.
It follows that
<code>$$\Cor(\hat\beta_1,\hat\beta_2)=-\Cor(x_i,z_i)$$</code>
whenever the <code>\(x_i\)</code> and <code>\(z_i\)</code> are not perfectly correlated.
As their correlation grows, the mean slope of the data in the directions spanned by the <code>\(x_i\)</code> and <code>\(z_i\)</code> approaches <code>\((\beta_1+\beta_2)\)</code>, and so the OLS estimates <code>\(\hat\beta_1\)</code> and <code>\(\hat\beta_2\)</code> increasingly “compete” for contributions to their sum: if sampling error leads to one coefficient being over-estimated then the other coefficient must be under-estimated to preserve the sum.
This competition drives the decreasing correlation of <code>\(\hat\beta_1\)</code> and <code>\(\hat\beta_2\)</code> as the <code>\(x_i\)</code> and <code>\(z_i\)</code> become more correlated.</p>
<p>The correlation of the <code>\(x_i\)</code> and <code>\(z_i\)</code> also determines the precision with which <code>\((\beta_1\pm\beta_2)\)</code> can be estimated.
In particular, the expression for <code>\(\Var(\hat\beta)\)</code> above implies
<code>$$\Var(\hat\beta_1\pm\hat\beta_2)=\frac{2\sigma^2}{n(1\pm\rho)}$$</code>
for <code>\(\abs{\rho}<1\)</code>.
As the <code>\(x_i\)</code> and <code>\(z_i\)</code> become more correlated (i.e., <code>\(\rho\)</code> rises), over-estimates of <code>\(\beta_1\)</code> must increasingly coincide with under-estimates of <code>\(\beta_2\)</code>, and so the estimate of <code>\((\beta_1+\beta_2)\)</code> becomes more precise because the errors cancel out.
Conversely, the estimate of <code>\((\beta_1-\beta_2)\)</code> becomes <em>less</em> precise as <code>\(\rho\)</code> rises because the errors in <code>\(\hat\beta_1\)</code> and <code>\(\hat\beta_2\)</code> amplify each other.</p>
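<p>These properties are straightforward to check by simulation. Below is a sketch in Python (rather than R, so it is self-contained), with hypothetical parameter values; the estimated correlation of <code>\(\hat\beta_1\)</code> and <code>\(\hat\beta_2\)</code> should land near <code>\(-\rho\)</code> up to sampling error:</p>

```python
import random

random.seed(0)
n, rho, beta1, beta2, reps = 200, 0.8, 1.0, 2.0, 500  # hypothetical values

b1s, b2s = [], []
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    # z correlated with x at (population) correlation rho
    z = [rho * xi + (1 - rho**2) ** 0.5 * random.gauss(0, 1) for xi in x]
    y = [beta1 * xi + beta2 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]
    # OLS without intercept: solve the 2x2 normal equations by hand
    sxx = sum(xi * xi for xi in x)
    szz = sum(zi * zi for zi in z)
    sxz = sum(xi * zi for xi, zi in zip(x, z))
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = sxx * szz - sxz ** 2
    b1s.append((szz * sxy - sxz * szy) / det)
    b2s.append((sxx * szy - sxz * sxy) / det)

# Pearson correlation of the estimated coefficients across replications
m1, m2 = sum(b1s) / reps, sum(b2s) / reps
cov = sum((a - m1) * (b - m2) for a, b in zip(b1s, b2s))
var1 = sum((a - m1) ** 2 for a in b1s)
var2 = sum((b - m2) ** 2 for b in b2s)
corr_b = cov / (var1 * var2) ** 0.5
print(round(corr_b, 1))  # close to -rho = -0.8
```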
<p>One application of this relationship between <code>\(\Var(\hat\beta_1\pm\hat\beta_2)\)</code> and <code>\(\rho\)</code> is to experimental design.
Suppose I want to estimate the effect of receiving two treatments—say, doses of a single vaccine—on some outcome of interest.
The <code>\(x_i\)</code> and <code>\(z_i\)</code> indicate whether individual <code>\(i\)</code> receives each dose, the coefficients <code>\(\beta_1\)</code> and <code>\(\beta_2\)</code> are the average treatment effects (ATEs) of receiving each dose, and the sum <code>\((\beta_1+\beta_2)\)</code> is the ATE of receiving both doses.
The most precise estimate of <code>\((\beta_1+\beta_2)\)</code> obtains when the treatments are perfectly positively correlated: that is, when people receive either zero or two doses, but no-one receives only one.
Intuitively, I learn more about the effect of receiving two doses from people who receive both than from people who receive only one, so the most informative experiment cannot have anyone who receives a single dose.</p>
<p>On the other hand, suppose I want to compare the effect of two distinct treatments—say, doses of different vaccines—on my outcome of interest.
Then I want to estimate <code>\((\beta_1-\beta_2)\)</code>, which I can do most precisely when the treatments are perfectly <em>negatively</em> correlated: that is, when people receive one type of vaccine or the other, but no-one receives both.
Intuitively, I learn more about the vaccines&rsquo; relative effects from people who receive only one type than from people who receive both, since the two vaccines&rsquo; effects are confounded in anyone who receives both.</p>
<hr>
<p><em>Thanks to <a href="https://chittaropeople.sites.stanford.edu">Lautaro Chittaro</a> for inspiring this post and commenting on a draft.</em></p>
Rationalizing negative splits
https://bldavies.com/blog/rationalizing-negative-splits/
Tue, 18 May 2021 00:00:00 +0000https://bldavies.com/blog/rationalizing-negative-splits/<p>Many competitive runners aim for <a href="https://en.wikipedia.org/wiki/Negative_split">negative splits</a>: running the second half of a race faster than the first.
A more general goal is to speed up as the race progresses.
This post analyzes the conditions under which that goal makes sense.
I derive these conditions mathematically, demonstrate them with <a href="#a-simple-example">an example</a>, and discuss some possible <a href="#extensions">extensions</a> to my analysis.</p>
<h2 id="when-is-speeding-up-optimal">When is speeding up optimal?</h2>
<p>Suppose I want to run a unit distance as fast as possible.
I choose a speed function <code>\(s:[0,1]\to(0,\infty)\)</code> that minimizes my total running time
<code>$$\newcommand{\der}{\mathrm{d}} \newcommand{\derfrac}[2]{\frac{\der #1}{\der #2}} \newcommand{\parfrac}[2]{\frac{\partial #1}{\partial #2}} T[s]:=\int_0^1\frac{1}{s(x)}\,\der x,$$</code>
where <code>\(x\)</code> indexes distance.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
However, running uses energy, of which I have a limited supply <code>\(e(0)=1\)</code> at the start of my run and which evolves according to
<code>$$\parfrac{e(x)}{x}=-r(x,s(x),e(x)),$$</code>
where <code>\(r\)</code> determines the rate of energy consumption based on the instantaneous values of <code>\(x\)</code>, <code>\(s(x)\)</code>, and <code>\(e(x)\)</code>.
I assume that running faster uses more energy (i.e., <code>\(r\)</code> is increasing in <code>\(s(x)\)</code>) and that I use all of my energy (i.e., <code>\(e(1)=0\)</code>).</p>
<p>My interest is in how the shape of <code>\(s\)</code> depends on the shape of <code>\(r\)</code>.
In particular, I want to know what conditions I have to put on <code>\(r\)</code> to make <code>\(s\)</code> an increasing function of <code>\(x\)</code>.
I determine these conditions as follows.
First, I define the <a href="https://en.wikipedia.org/wiki/Hamiltonian_%28control_theory%29">Hamiltonian</a>
<code>$$H(x,s(x),e(x),\lambda(x))\equiv-\frac{1}{s(x)}-\lambda(x)r(x,s(x),e(x)),$$</code>
where <code>\(\lambda\)</code> is a co-state function.
Under some regularity conditions, I can choose the optimizing functions point-wise, so for convenience I let <code>\(x\in[0,1]\)</code> be arbitrary and suppress functions’ arguments.
Then <code>\(s\)</code> and <code>\(\lambda\)</code> satisfy the first-order conditions (FOCs)
<code>$$\begin{aligned} 0&=H_s=\frac{1}{s^2}-\lambda r_s \\ -\lambda_x&=H_e=-\lambda r_e, \end{aligned}$$</code>
where subscripts denote (partial) differentiation.
Differentiating the first FOC with respect to <code>\(x\)</code> gives
<code>$$\frac{2s_x}{s^3}=-\lambda_xr_s-\lambda r_{sx},$$</code>
which, after substituting back in the two FOCs and dividing by <code>\(2\lambda r_s\)</code>, becomes
<code>$$\frac{s_x}{s}=-\frac{1}{2}\left(r_e+\frac{r_{sx}}{r_s}\right).$$</code>
Thus, if <code>\(s_x>0\)</code> then at least one of two conditions on <code>\(r\)</code> must hold:</p>
<ol>
<li><code>\(r_e<0\)</code>, which means that I use energy faster when I have less of it;</li>
<li><code>\(r_{sx}/r_s<0\)</code>, which, coupled with the assumption that <code>\(r_s>0\)</code>, means that the energy cost of running fast falls as I cover more distance.</li>
</ol>
<p>The intuition for the first condition is as follows:
energy falls with distance, and if it starts falling faster then I have to start running faster to avoid running out of energy before the finish line.
The second condition amplifies this motive to speed up by lowering the cost of running fast as the finish line approaches.
I don’t know enough about physiology to know which condition is more plausible, but from experience I’m sympathetic to the second: I’m much less likely to <a href="https://en.wikipedia.org/wiki/Hitting_the_wall">bonk</a> while running if I warm up slowly than if I sprint out of the gate.</p>
<h2 id="a-simple-example">A simple example</h2>
<p>Suppose I consume energy at the rate
<code>$$r(x,s(x),e(x))=(1-ax)s(x)$$</code>
for some parameter <code>\(a\in(0,1)\)</code>, which determines how the energy cost of running fast changes during my run.
That cost is approximately constant when <code>\(a\approx0\)</code> and falls more steeply in <code>\(x\)</code> as <code>\(a\)</code> approaches unity.
Given this definition of <code>\(r\)</code>, and given the boundary conditions <code>\(e(0)=1\)</code> and <code>\(e(1)=0\)</code>, the time-minimizing speed and energy profiles are
<code>$$s(x)=\frac{2\left(1-(1-a)^{3/2}\right)}{3a\sqrt{1-ax}}$$</code>
and
<code>$$e(x)=1-\frac{1-(1-ax)^{3/2}}{1-(1-a)^{3/2}}.$$</code>
Then <code>\(s\)</code> is an increasing function of <code>\(x\)</code> and becomes more convex as <code>\(a\)</code> rises.
It turns out that <code>\(T[s]=1\)</code> for all <code>\(a\in(0,1)\)</code>, so varying <code>\(a\)</code> preserves the mean speed <code>\(1/T[s]=1\)</code> but varies the curvature of <code>\(s\)</code> around that mean.
More generally, the time
<code>$$t(x)\equiv\int_0^x\frac{1}{s(y)}\,\der y$$</code>
taken to run distance <code>\(x\in[0,1]\)</code> satisfies <code>\(t(x)=1-e(x)\)</code>; that is, the proportion of time elapsed always equals the proportion of energy consumed.</p>
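<p>These closed forms are easy to verify numerically. Below is a sketch in Python, fixing the parameter at <code>\(a=1/2\)</code> and approximating <code>\(t(x)\)</code> with a midpoint Riemann sum:</p>

```python
a = 0.5
C = 1 - (1 - a) ** 1.5  # recurring constant 1 - (1-a)^{3/2}

def s(x):
    """Time-minimizing speed profile."""
    return 2 * C / (3 * a * (1 - a * x) ** 0.5)

def e(x):
    """Energy remaining at distance x."""
    return 1 - (1 - (1 - a * x) ** 1.5) / C

def t(x, steps=20000):
    """Time taken to run distance x: the integral of 1/s."""
    h = x / steps
    return sum(h / s((i + 0.5) * h) for i in range(steps))

print(round(t(1.0), 6))           # total time T[s] = 1
print(round(t(0.5) + e(0.5), 6))  # t(x) + e(x) = 1 at every x
```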
<p>The chart below plots <code>\(s(x)\)</code> and <code>\(t(x)\)</code> when <code>\(r=(1-ax)s(x)\)</code>.
When <code>\(a\approx0\)</code>, the energy cost of running fast is approximately constant with respect to distance and so the optimal speed profile is approximately flat.
As <code>\(a\)</code> increases, the cost of running fast increasingly falls with distance and so the optimal speed increasingly rises with distance.
Consequently, the percentage of time and energy spent on the first half of the run increases with <code>\(a\)</code>, starting at 50% when <code>\(a\approx0\)</code> and rising to 65% as <code>\(a\)</code> approaches unity.</p>
<p><img src="figures/plot-1.svg" alt=""></p>
<h2 id="extensions">Extensions</h2>
<p>One way to extend my analysis could be to <a href="https://bldavies.com/blog/optimal-pacing-random-energy-costs/">make the energy consumption rate stochastic</a>.
For example, if I run on unfamiliar terrain then I face uncertainty about upcoming obstacles (e.g., steep hills) and the energy cost of overcoming those obstacles.
This uncertainty would encourage me to start my run slowly as a form of <a href="https://en.wikipedia.org/wiki/Precautionary_savings">precautionary saving</a>, resulting in negative splits.</p>
<p>Another extension could be to model the different energy systems used when running at different speeds.
For example, short sprints use the anaerobic system, which burns carbohydrates for fuel, while long slow runs use the aerobic system, which also burns fat for fuel.
Adding more energy systems would allow for richer, more realistic dynamics, but would require more domain knowledge than I possess to set up the inter-dependencies correctly.</p>
<hr>
<p><em>Thanks to Logan Donald and Florian Fiaux for commenting on a draft version of this post.</em></p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>I formalize my pacing problem as a “continuous-time” <a href="https://en.wikipedia.org/wiki/Optimal_control">optimal control problem</a>.
I consider a discrete-time version of this problem <a href="https://bldavies.com/blog/optimal-pacing-varying-energy-costs/">here</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Stable matchings with noisy preferences
https://bldavies.com/blog/stable-matchings-noisy-preferences/
Sun, 02 May 2021 00:00:00 +0000https://bldavies.com/blog/stable-matchings-noisy-preferences/<p>My <a href="https://bldavies.com/blog/stable-matchings/">previous post</a> described <a href="https://doi.org/10.2307/2312726">Gale and Shapley’s (1962)</a> algorithm for solving <a href="https://en.wikipedia.org/wiki/Stable_marriage_problem">the stable matching problem</a>.
The algorithm delivers a matching between two sets <code>\(A\)</code> and <code>\(B\)</code> of <code>\(n\)</code> people with preferences over matches in the other set.</p>
<p>The Gale-Shapley (GS) algorithm works by letting people in <code>\(A\)</code> make proposals to people in <code>\(B\)</code>, who “tentatively accept” or reject proposals until the matching market clears.
Consequently, if one side of the market is more informed about match qualities than the other side then the algorithm could generate different levels of welfare depending on which side makes proposals.</p>
<p>For example, suppose <code>\(a\in A\)</code> and <code>\(b\in B\)</code> generate surplus <code>\(S_{ab}\)</code> from being matched.
This surplus has a monetary value (representing, e.g., the price <code>\(a\)</code> and <code>\(b\)</code> would pay to be matched) so can be aggregated across pairs meaningfully.
Both <code>\(a\)</code> and <code>\(b\)</code> want the match that gives them the greatest surplus.
However, they perceive match surpluses noisily: person <code>\(a\)</code> thinks their surplus from matching with <code>\(b\)</code> is
<code>$$S_{ab}^A=S_{ab}+\epsilon_{ab}^A,$$</code>
while <code>\(b\)</code> thinks their surplus from matching with <code>\(a\)</code> is
<code>$$S_{ab}^B=S_{ab}+\epsilon_{ab}^B.$$</code>
The <code>\(S_{ab}\)</code> are iid standard normal, the <code>\(\epsilon_{ab}^A\)</code> are iid normal with mean zero and variance <code>\(\sigma_A^2\)</code>, and the <code>\(\epsilon_{ab}^B\)</code> are iid normal with mean zero and variance <code>\(\sigma_B^2\)</code>.
Increasing <code>\(\sigma_A\)</code> and <code>\(\sigma_B\)</code> increases the errors in perceived surpluses.
These errors disappear when the matching is made and the &ldquo;true&rdquo; surpluses <code>\(S_{ab}\)</code> (representing people&rsquo;s true preferences) are realized.</p>
<p>I compare the distribution of mean match surpluses delivered by four matching procedures:</p>
<ol>
<li><em>MBM</em>: the <a href="https://en.wikipedia.org/wiki/Maximum_weight_matching">maximum-weight bipartite matching</a> based on the true match surpluses <code>\(S_{ab}\)</code>;</li>
<li><em>GS-A</em>: the GS algorithm with people in <code>\(A\)</code> proposing based on their perceived match surpluses <code>\(S_{ab}^A\)</code>;</li>
<li><em>GS-B</em>: the GS algorithm with people in <code>\(B\)</code> proposing based on their perceived match surpluses <code>\(S_{ab}^B\)</code>;</li>
<li><em>Feasible MBM</em>: the maximum-weight bipartite matching based on the precision-weighted mean perceived match surpluses
<code>$$\hat{S}_{ab}=\begin{cases} S_{ab} & \text{if}\ \sigma_A=0\ \text{or}\ \sigma_B=0 \\ \lambda S_{ab}^A+(1-\lambda)S_{ab}^B & \text{otherwise}, \end{cases}$$</code>
where
<code>$$\lambda=\frac{1/\sigma_A^2}{1/\sigma_A^2+1/\sigma_B^2}$$</code>
is the relative precision of <code>\(A\)</code> members’ perceptions when <code>\(\min\{\sigma_A,\sigma_B\}>0\)</code>.
<em>Feasible MBM</em> replicates <em>MBM</em> when <code>\(\min\{\sigma_A,\sigma_B\}=0\)</code>.</li>
</ol>
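<p>For intuition, the two MBM procedures can be sketched by brute force on a tiny hypothetical market (Python, standard library only; a real implementation would use a maximum-weight bipartite matching solver rather than enumerating permutations):</p>

```python
import random
from itertools import permutations

random.seed(1)
n, sigma_A, sigma_B = 5, 1.0, 1.0  # hypothetical market size and noise levels

S = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]  # true surpluses
SA = [[S[a][b] + random.gauss(0, sigma_A) for b in range(n)] for a in range(n)]
SB = [[S[a][b] + random.gauss(0, sigma_B) for b in range(n)] for a in range(n)]

# Precision-weighted mean of the two noisy perceptions
lam = (1 / sigma_A**2) / (1 / sigma_A**2 + 1 / sigma_B**2)
S_hat = [[lam * SA[a][b] + (1 - lam) * SB[a][b] for b in range(n)] for a in range(n)]

def mbm(W):
    """Maximum-weight bipartite matching by enumeration (fine for tiny n)."""
    return max(permutations(range(n)), key=lambda p: sum(W[a][p[a]] for a in range(n)))

def mean_surplus(p):
    """Mean *true* surplus realized under matching p."""
    return sum(S[a][p[a]] for a in range(n)) / n

# MBM maximizes true surplus, so Feasible MBM can never beat it
print(mean_surplus(mbm(S)) >= mean_surplus(mbm(S_hat)))  # → True
```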
<p>The <em>MBM</em> procedure maximizes the sum of true match surpluses, while the <em>Feasible MBM</em> procedure maximizes the sum of the best match surplus estimates that people in <code>\(A\)</code> and <code>\(B\)</code> could obtain by communicating.
The <em>GS-A</em> and <em>GS-B</em> procedures do not allow such communication, but guarantee that the ultimate matching is stable.
I run all four procedures 1,000 times for <code>\(\sigma_A\in\{0,1,5\}\)</code> and <code>\(\sigma_B\in\{0,1,5\}\)</code>, and summarize my results in the figure below.
All four procedures deliver mean match surpluses greater than zero, implying that people tend to do better by following the procedures than by forming matches randomly.</p>
<p><img src="figures/summary-1.svg" alt=""></p>
<p>The mean match surpluses delivered by the <em>GS-A</em>, <em>GS-B</em>, and <em>Feasible MBM</em> procedures fall as <code>\(\sigma_A\)</code> and <code>\(\sigma_B\)</code> rise.
Intuitively, these three procedures rely on preferences reported by the people in <code>\(A\)</code> and <code>\(B\)</code>, and if those preferences become noisier then the procedures become worse at finding good matches.</p>
<p><em>Feasible MBM</em> tends to outperform <em>GS-A</em> and <em>GS-B</em> when <code>\(\sigma_A\)</code> or <code>\(\sigma_B\)</code> is small.
However, the performance gain is negligible when <code>\(\sigma_A\)</code> and <code>\(\sigma_B\)</code> are large.
Intuitively, if perceived match surpluses are mostly noise then sharing that noise doesn’t help with finding better matches.</p>
<p>The GS algorithm tends to find better matches when the people making proposals are the ones with less noisy preferences.
Both sides of the matching market provide information that determines the ultimate matching: the proposing side provides information <em>actively</em> through proposals, whereas the non-proposing side provides information <em>passively</em> through proposal acceptances and rejections.
Letting the more-informed side make proposals allows more information to feed into the matching process, leading to better matches on average.</p>
<hr>
<p><em>Thanks to <a href="https://www.spantoja.com">Spencer Pantoja</a> for inspiring this post and to <a href="https://web.stanford.edu/~alroth/">Al Roth</a> for his comments.</em></p>
Stable matchings
https://bldavies.com/blog/stable-matchings/
Mon, 19 Apr 2021 00:00:00 +0000https://bldavies.com/blog/stable-matchings/<p>Let <code>\(A\)</code> and <code>\(B\)</code> be sets of <code>\(n\)</code> people.
A “<a href="https://en.wikipedia.org/wiki/Matching_%28graph_theory%29">matching</a>” is a collection of pairs <code>\((a,b)\)</code> with <code>\(a\in A\)</code> and <code>\(b\in B\)</code> such that everyone in <code>\(A\cup B\)</code> belongs to exactly one pair.
For example, if <code>\(A\)</code> and <code>\(B\)</code> are sets of men and women then a matching could define a collection of monogamous, heterosexual marriages.</p>
<p>Suppose the people in each set have (complete, strict) preferences over potential matches in the other set.
A matching is “<a href="https://en.wikipedia.org/wiki/Stable_marriage_problem">stable</a>” if there are no unmatched pairs who prefer each other to their match.
<a href="https://doi.org/10.2307/2312726">Gale and Shapley (1962)</a> show that a stable matching always exists and describe an <a href="https://en.wikipedia.org/wiki/Gale%E2%80%93Shapley_algorithm">algorithm</a> for finding it:
Let each person <code>\(a\in A\)</code> without a match &ldquo;propose&rdquo; to their most preferred person <code>\(b\in B\)</code> to whom they haven&rsquo;t already proposed.
If <code>\(b\)</code> is unmatched then they tentatively accept the proposal;
if <code>\(b\)</code> is matched to <code>\(a'\)</code> but prefers <code>\(a\)</code> then they tentatively accept the proposal and reject <code>\(a'\)</code>;
otherwise, <code>\(b\)</code> rejects the proposal.
Repeat this process until everyone is matched.</p>
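<p>A minimal sketch of this proposal process in Python, with preferences given as ranked lists of (zero-based) indices:</p>

```python
def gale_shapley(pref_A, pref_B):
    """Stable matching with side A proposing.

    pref_A[a] ranks B-members for a (most preferred first); pref_B[b] ranks
    A-members for b. Returns a dict mapping each a to their match b.
    """
    n = len(pref_A)
    rank_B = [{a: r for r, a in enumerate(p)} for p in pref_B]  # b's rank of each a
    next_prop = [0] * n      # index of a's next proposal target in pref_A[a]
    match_of = [None] * n    # current tentative match of each b
    free = list(range(n))    # unmatched members of A
    while free:
        a = free.pop()
        b = pref_A[a][next_prop[a]]  # a's favorite not-yet-proposed-to b
        next_prop[a] += 1
        current = match_of[b]
        if current is None:
            match_of[b] = a  # b is unmatched: tentatively accept
        elif rank_B[b][a] < rank_B[b][current]:
            match_of[b] = a  # b prefers a: accept a, reject current match
            free.append(current)
        else:
            free.append(a)   # b rejects the proposal
    return {a: b for b, a in enumerate(match_of)}

# A small example with three people per side
pref_A = [[1, 0, 2], [0, 1, 2], [0, 1, 2]]
pref_B = [[0, 2, 1], [2, 0, 1], [0, 1, 2]]
print(gale_shapley(pref_A, pref_B))  # matches a_1-b_2, a_2-b_3, a_3-b_1
```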
<h2 id="optimality-and-strategy-proofness">Optimality and strategy-proofness</h2>
<p>The Gale-Shapley (GS) algorithm always delivers a stable matching that is best for everyone in <code>\(A\)</code> among all stable matchings.
To see why, suppose <code>\(a\in A\)</code> is matched to <code>\(b\in B\)</code> but prefers <code>\(b'\in B\setminus\{b\}\)</code>.
Then <code>\(b'\)</code> must have received a proposal from some <code>\(a'\in A\setminus\{a\}\)</code> whom they prefer to <code>\(a\)</code>.
Consequently, <code>\(a\)</code> cannot form a “blocking pair” with <code>\(b'\)</code> (and thereby break the stable matching) because <code>\(b'\)</code> would rather be matched to <code>\(a'\)</code>.
Thus <code>\(b\)</code> is the best match <code>\(a\)</code> can get if the matching is stable.</p>
<p>On the other hand, the GS algorithm always delivers a stable matching that is <em>worst</em> for everyone in <code>\(B\)</code> among all stable matchings.
To see why, suppose <code>\(b\in B\)</code> is matched to <code>\(a\in A\)</code> in some matching <code>\(\mathcal{M}\)</code> obtained using the GS algorithm.
Suppose further that <code>\(b\)</code> prefers <code>\(a\)</code> to some <code>\(a'\in A\setminus\{a\}\)</code> and assume towards a contradiction that there is a stable matching <code>\(\mathcal{M}'\)</code> in which <code>\(b\)</code> is matched to <code>\(a'\)</code>.
Then <code>\(a\)</code> is matched to some <code>\(b'\in B\setminus\{b\}\)</code> in <code>\(\mathcal{M}'\)</code>.
Now <code>\(\mathcal{M}\)</code> was obtained using the GS algorithm, so it gives <code>\(a\)</code> their top preference among all stable matchings.
Consequently, <code>\(a\)</code> must prefer <code>\(b\)</code> to <code>\(b'\)</code>.
But then <code>\(a\)</code> and <code>\(b\)</code> form a blocking pair in <code>\(\mathcal{M}'\)</code>, contradicting its stability.
Thus <code>\(a'\)</code> cannot exist; that is, <code>\(a\)</code> is the worst match <code>\(b\)</code> can get among all stable matchings.</p>
<p>The GS algorithm is <a href="https://en.wikipedia.org/wiki/Strategyproofness">strategy-proof</a> for everyone in <code>\(A\)</code>: no-one in <code>\(A\)</code> can do better by misreporting their preferences (<a href="https://doi.org/10.1287/moor.7.4.617">Roth, 1982</a>), nor can any subset of <code>\(A\)</code> coordinate to do (strictly) better (<a href="https://doi.org/10.2307/2321753">Dubins and Freedman, 1981</a>).
However, people in <code>\(B\)</code> may be able to do better.
For example, suppose the preferences among people in <code>\(A=\{a_1,a_2,a_3\}\)</code> and <code>\(B=\{b_1,b_2,b_3\}\)</code> are given by
<code>$$\begin{align*} b_2&\succ_{a_1}b_1\succ_{a_1}b_3 \\ b_1&\succ_{a_2}b_2\succ_{a_2}b_3 \\ b_1&\succ_{a_3}b_2\succ_{a_3}b_3 \\ a_1&\succ_{b_1}a_3\succ_{b_1}a_2 \\ a_3&\succ_{b_2}a_1\succ_{b_2}a_2 \\ a_1&\succ_{b_3}a_2\succ_{b_3}a_3, \end{align*}$$</code>
where <code>\(j\succ_ik\)</code> means that <code>\(i\)</code> prefers <code>\(j\)</code> to <code>\(k\)</code>.
Applying the GS algorithm to these preferences delivers the stable matching <code>\(\{(a_1,b_2),(a_2,b_3),(a_3,b_1)\}\)</code>.
But if <code>\(b_1\)</code> misreported their preferences as <code>\(a_1\succ_{b_1}a_2\succ_{b_1}a_3\)</code> then the algorithm would deliver <code>\(\{(a_1,b_1),(a_2,b_3),(a_3,b_2)\}\)</code>, which <code>\(b_1\)</code> prefers.</p>
<h2 id="convergence">Convergence</h2>
<p>Since everyone in <code>\(A\)</code> proposes to everyone in <code>\(B\)</code> at most once, the GS algorithm never requires more than <code>\(n^2\)</code> proposals.
However, the algorithm typically requires fewer proposals.
For example, suppose the utility <code>\(a\in A\)</code> derives from being matched to <code>\(b\in B\)</code> is
<code>$$U_{ab}=\rho W_b+(1-\rho)X_{ab},$$</code>
where <code>\(W_b\)</code> and <code>\(X_{ab}\)</code> are iid uniformly distributed on the unit interval <code>\([0,1]\)</code>, and where <code>\(\rho\)</code> indexes the correlation of match utilities.
Similarly, suppose the utility <code>\(b\)</code> derives from being matched to <code>\(a\)</code> is
<code>$$V_{ba}=\rho Y_a+(1-\rho)Z_{ba},$$</code>
where <code>\(Y_a\)</code> and <code>\(Z_{ba}\)</code> are also iid uniform on <code>\([0,1]\)</code>.
The utilities <code>\(U_{ab}\)</code> and <code>\(V_{ba}\)</code> determine people’s preferences over matches, and increasing <code>\(\rho\)</code> makes those preferences more homogeneous.
The chart below shows how the number of proposals required by the GS algorithm covaries with <code>\(\rho\)</code> when <code>\(n=20\)</code>.</p>
<p><img src="figures/plot-1.svg" alt=""></p>
<p>On average, more proposals are required when preferences are more homogeneous.
Intuitively, increasing <code>\(\rho\)</code> makes it more likely that an early tentative acceptance will become a rejection, forcing the rejected person to make another proposal.
If <code>\(\rho=1\)</code> then the GS algorithm always requires
<code>$$\sum_{x=1}^nx=\frac{n(n+1)}{2}$$</code>
proposals.
To see why, notice that if <code>\(\rho=1\)</code> then everyone in <code>\(A\)</code> has the same preferences over everyone in <code>\(B\)</code> and vice versa.
Consequently, the person in <code>\(A\)</code> most preferred by the people in <code>\(B\)</code> always gets their first choice, the person in <code>\(A\)</code> second-most preferred by the people in <code>\(B\)</code> always gets their second choice, and so on.
But since everyone in <code>\(A\)</code> shares the same ranking of the people in <code>\(B\)</code>, the person who ends up with their <code>\(k\)</code>th choice proposes to each of their <code>\(k\)</code> most-preferred partners exactly once, making <code>\(k\)</code> proposals in total.</p>
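<p>A quick simulation with the utilities defined above confirms the <code>\(\rho=1\)</code> proposal count (a sketch; the implementation and names are mine):</p>

```python
import random

def count_proposals(n, rho, seed=0):
    """Proposer-side deferred acceptance with random utilities
    U_ab = rho*W_b + (1-rho)*X_ab and V_ba = rho*Y_a + (1-rho)*Z_ba;
    returns the number of proposals made before the matching stabilizes."""
    rng = random.Random(seed)
    W = [rng.random() for _ in range(n)]
    Y = [rng.random() for _ in range(n)]
    U = [[rho * W[b] + (1 - rho) * rng.random() for b in range(n)] for a in range(n)]
    V = [[rho * Y[a] + (1 - rho) * rng.random() for a in range(n)] for b in range(n)]
    prefs = [sorted(range(n), key=lambda b: -U[a][b]) for a in range(n)]
    nxt = [0] * n        # index of each proposer's next target
    held = [None] * n    # held[b] is the proposer b has tentatively accepted
    free, proposals = list(range(n)), 0
    while free:
        a = free.pop()
        b = prefs[a][nxt[a]]
        nxt[a] += 1
        proposals += 1
        if held[b] is None:
            held[b] = a
        elif V[b][a] > V[b][held[b]]:
            free.append(held[b])   # b trades up; old proposer re-enters
            held[b] = a
        else:
            free.append(a)         # rejected; a proposes again later
    return proposals

print(count_proposals(20, rho=1.0))  # 20 * 21 / 2 = 210
```

<p>With <code>\(\rho=1\)</code> the count is 210 regardless of the seed, matching the formula above; with <code>\(\rho<1\)</code> it varies with the draw of utilities.</p>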
<h2 id="limitations">Limitations</h2>
<p>One limitation of the GS algorithm is that it assumes everyone has strict, complete preferences over potential matches.
This assumption may not hold in practice: <code>\(a\in A\)</code> could be indifferent between <code>\(b\in B\)</code> and <code>\(b'\in B\)</code>, or <code>\(a\)</code> may not even know who is in <code>\(B\)</code> let alone the utilities derived from being matched to them.
<a href="https://doi.org/10.1016/0166-218X%2892%2900179-P">Irving (1994)</a> generalizes the GS algorithm to handle situations with indifferences, while <a href="https://doi.org/10.1016/S0304-3975%2801%2900206-7">Manlove et al. (2002)</a> describe the computational complexity generated by allowing for incomplete preferences.</p>
<p>Another limitation of the GS algorithm is that it always delivers a stable matching that is best for people in <code>\(A\)</code> and worst for people in <code>\(B\)</code>.
This “extremal” property of the algorithm’s output motivates alternative algorithms (e.g., those by <a href="https://doi.org/10.2307/2938326">Roth and Vande Vate (1990)</a> and <a href="https://doi.org/10.1007/s11238-005-6846-0">Romero-Medina (2005)</a>, and more recently <a href="https://doi.org/10.1287/opre.2020.2042">Dworczak (2021)</a> and <a href="https://ideas.repec.org/p/cte/werepe/31711.html">Kuvalekar and Romero-Medina (2021)</a>) that deliver <em>ex ante</em> fairer matchings by randomizing whose preferences (i.e., people in <code>\(A\)</code> or people in <code>\(B\)</code>) are used to form matches.</p>
<p>A third limitation is that the GS algorithm assumes match utilities do not depend on the sequence of proposals.
In particular, the algorithm assumes that <code>\(a\in A\)</code> derives the same utility from being matched to <code>\(b\in B\)</code> regardless of how much <code>\(b\)</code> wants to be matched to <code>\(a\)</code>.
This assumption seems unrealistic: if I proposed to someone but later learned I was <em>the last person</em> they wanted to marry then that lesson would surely affect my comfort with the proposal.
One way to resolve this issue could be to run the algorithm many times, allowing people to revise their preferences at each run based on the matching obtained in the previous run.
However, this approach could be expensive—computationally, cognitively, and emotionally—and might not converge if people’s preference revisions aren’t well-behaved.</p>
Female representation and collaboration at the NBER
https://bldavies.com/blog/female-representation-collaboration-nber/
Mon, 29 Mar 2021 00:00:00 +0000https://bldavies.com/blog/female-representation-collaboration-nber/<p>This post analyzes the <a href="#representation-across-research-programs">representation</a> of, and <a href="#co-authorship-patterns">collaboration</a> among, female authors of <a href="https://www.nber.org/papers">NBER working papers</a> over the last four decades.
My analysis uses paper-author correspondences provided by the R package <a href="https://github.com/bldavies/nberwp">nberwp</a>.</p>
<h2 id="estimating-sexes">Estimating sexes</h2>
<p>I estimate authors’ sexes using the R package <a href="https://cran.r-project.org/package=gender">gender</a>, which provides access to historical baby name data from the US Social Security Administration.
I focus on baby names between 1940 and 1995 because these roughly correspond to (what I expect are) the birth years of authors who published NBER working papers during the 1980s through 2010s.</p>
<p>Comparing authors’ first names to the frequency of female and male baby names allows me to estimate the probability that each author is female.
For example, 3% of babies named Alex between 1940 and 1995 were female, so the estimated probability that an author named Alex is female is 0.03.
Rounding each probability to the nearest integer (zero or one) gives a point estimate of the binary indicator for whether each author is female.</p>
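<p>The gender package performs, in essence, the following computation on the real SSA counts (the counts below are hypothetical, chosen to match the Alex example):</p>

```python
# Hypothetical name counts in the style of the SSA baby-name data:
# name -> (female births, male births) between 1940 and 1995.
name_counts = {
    'Alex':  (3_000, 97_000),
    'Maria': (99_500, 500),
}

def p_female(name):
    """Estimated probability that an author with this first name is female."""
    f, m = name_counts[name]
    return f / (f + m)

def is_female(name):
    """Point estimate: round the probability to the nearest integer (0 or 1)."""
    return round(p_female(name))

print(p_female('Alex'))    # 0.03
print(is_female('Alex'))   # 0
print(is_female('Maria'))  # 1
```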
<p>The table below reports the number of NBER working papers and authors during the 1980s, 1990s, 2000s, and 2010s.
It also reports the percentage of those authors whom I estimate to be female, as well as the percentage of authors whose sexes I can estimate.
The number of authors roughly doubled each decade, and the percentage of those authors whom I estimate to be female almost doubled between the 1980s and 2010s.</p>
<table>
<thead>
<tr>
<th align="center">Decade</th>
<th align="center">Papers</th>
<th align="center">Authors</th>
<th align="center">% authors female</th>
<th align="center">% authors with estimable sex</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">1980s</td>
<td align="center">2,820</td>
<td align="center">972</td>
<td align="center">14.1</td>
<td align="center">93.9</td>
</tr>
<tr>
<td align="center">1990s</td>
<td align="center">4,213</td>
<td align="center">2,211</td>
<td align="center">19.7</td>
<td align="center">88.3</td>
</tr>
<tr>
<td align="center">2000s</td>
<td align="center">8,188</td>
<td align="center">5,118</td>
<td align="center">24.0</td>
<td align="center">85.5</td>
</tr>
<tr>
<td align="center">2010s</td>
<td align="center">10,970</td>
<td align="center">9,519</td>
<td align="center">27.0</td>
<td align="center">84.1</td>
</tr>
</tbody>
</table>
<p>The percentage of authors with estimable sex is less than 100% because some authors (i) never listed their first names on their papers’ bylines (e.g., always published as “J. Smith”) or (ii) have first names that do not appear in the baby name data.
Throughout this post, I assume that conditions (i) and (ii) occur at the same rate for both sexes.
Almost all (99.9%) of the authors satisfying either condition satisfy (ii) because they have foreign names.
The decrease in sex estimability over time reflects the increase in (co-)authorship of NBER working papers by researchers born outside the United States.</p>
<h2 id="representation-across-research-programs">Representation across research programs</h2>
<p>The NBER organizes its research into <a href="https://www.nber.org/programs-projects/programs-working-groups">programs</a>, each of which “corresponds loosely to a traditional field of study within economics.”
I count the papers associated with each program in <a href="#appendix">the appendix below</a>.
The largest programs are Labor Studies, Economic Fluctuations and Growth, and Public Economics, reflecting the NBER’s focus on policy-relevant economic research.</p>
<p>The table below reports the percentage of authors whom I estimate to be female in each of the NBER’s ten largest research programs.
I pool the remaining eleven programs into an “Other” program and report separate percentages for each decade.
The percentage of female authors grew over time, both overall and within each of the tabulated programs, and was larger in programs that are relatively focused on individual-level outcomes (e.g., Labor Studies and Health Economics).
I omit the percentages for Asset Pricing and Corporate Finance in the 1980s because there was only one paper associated with those programs during that decade.</p>
<table>
<thead>
<tr>
<th align="left">Program</th>
<th align="right">1980s</th>
<th align="right">1990s</th>
<th align="right">2000s</th>
<th align="right">2010s</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Labor Studies (LS)</td>
<td align="right">19.1</td>
<td align="right">26.2</td>
<td align="right">27.1</td>
<td align="right">29.9</td>
</tr>
<tr>
<td align="left">Economic Fluctuations and Growth (EFG)</td>
<td align="right">5.9</td>
<td align="right">9.2</td>
<td align="right">17.4</td>
<td align="right">18.9</td>
</tr>
<tr>
<td align="left">Public Economics (PE)</td>
<td align="right">8.5</td>
<td align="right">16.7</td>
<td align="right">21.9</td>
<td align="right">26.3</td>
</tr>
<tr>
<td align="left">International Finance and Macroeconomics (IFM)</td>
<td align="right">14.5</td>
<td align="right">13.8</td>
<td align="right">16.5</td>
<td align="right">20.7</td>
</tr>
<tr>
<td align="left">International Trade and Investment (ITI)</td>
<td align="right">14.7</td>
<td align="right">15.9</td>
<td align="right">23.6</td>
<td align="right">23.2</td>
</tr>
<tr>
<td align="left">Monetary Economics (ME)</td>
<td align="right">5.3</td>
<td align="right">11.8</td>
<td align="right">13.9</td>
<td align="right">17.5</td>
</tr>
<tr>
<td align="left">Asset Pricing (AP)</td>
<td align="right">-</td>
<td align="right">10.7</td>
<td align="right">16.5</td>
<td align="right">18.1</td>
</tr>
<tr>
<td align="left">Productivity, Innovation, and Entrepreneurship (PR)</td>
<td align="right">15.4</td>
<td align="right">23.4</td>
<td align="right">22.2</td>
<td align="right">24.1</td>
</tr>
<tr>
<td align="left">Corporate Finance (CF)</td>
<td align="right">-</td>
<td align="right">12.9</td>
<td align="right">22.4</td>
<td align="right">20.6</td>
</tr>
<tr>
<td align="left">Health Economics (HE)</td>
<td align="right">20.4</td>
<td align="right">23.3</td>
<td align="right">33.9</td>
<td align="right">33.5</td>
</tr>
<tr>
<td align="left">Other</td>
<td align="right">11.2</td>
<td align="right">22.7</td>
<td align="right">26.4</td>
<td align="right">28.4</td>
</tr>
<tr>
<td align="left">All</td>
<td align="right">14.1</td>
<td align="right">19.7</td>
<td align="right">24.0</td>
<td align="right">27.0</td>
</tr>
</tbody>
</table>
<p>Another way to analyze female representation is to compare the density of female-authored working papers across programs.
I present this comparison in the chart below, focusing on papers published during the 2010s.
The horizontal axis measures the percentage of working papers published by female authors in each program.
I compute these percentages by counting papers “fractionally” so that, for example, papers with two authors and three associated programs contribute a sixth of a paper to the count for each author-program pair.
This method avoids double-counting papers across programs and sexes.
Aggregating fractional counts by program and sex allows me to estimate the percentage of working papers published in each program by female authors.
I order programs by percentage of female authorship and color them according to a categorization based on that used by <a href="https://www.nber.org/papers/w23953">Chari and Goldsmith-Pinkham (2017)</a>.</p>
<p><img src="figures/female-authorships-1.svg" alt=""></p>
<p>Overall, females wrote about 21% of the working papers published during the 2010s.
These papers were relatively concentrated among programs focused on applied microeconomics rather than on macroeconomics or finance.
These patterns echo those presented by Chari and Goldsmith-Pinkham (2017), and could reflect differences in academic culture between different branches of economics (see, e.g., <a href="https://www.nber.org/papers/w28494">Dupas et al., 2021</a>).</p>
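<p>The fractional counting scheme described above can be sketched as follows (the paper records here are hypothetical; the first mimics the two-author, three-program example):</p>

```python
from collections import defaultdict

# Hypothetical paper records: (authors' sexes, associated programs).
papers = [
    (['F', 'M'], ['LS', 'PE', 'HE']),   # 2 authors, 3 programs
    (['F'],      ['LS']),               # solo-authored, single program
]

counts = defaultdict(float)             # (program, sex) -> fractional papers
for sexes, programs in papers:
    weight = 1 / (len(sexes) * len(programs))   # per author-program pair
    for sex in sexes:
        for prog in programs:
            counts[(prog, sex)] += weight

# Each paper contributes exactly one paper in total across all cells,
# so there is no double-counting across programs or sexes.
print(sum(counts.values()))     # 2.0 (two papers)
print(counts[('LS', 'F')])      # 1/6 + 1
```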
<h2 id="co-authorship-patterns">Co-authorship patterns</h2>
<p>I infer the collaboration patterns among NBER authors from the working paper co-authorship network for each decade.
In each network, nodes correspond to authors who published at least one working paper during that decade, and edges join authors who co-authored at least one working paper during that decade.
The table below summarizes each network.
The networks grew larger and less dense over time, while the rise in mean degree—that is, the mean number of co-authors—reflects the rise in co-authorship among economists documented in other studies (e.g., <a href="https://doi.org/10.1080/13504851.2015.1119783">Rath and Wohlrabe, 2017</a>).</p>
<table>
<thead>
<tr>
<th align="center">Decade</th>
<th align="center">Nodes</th>
<th align="center">Edges</th>
<th align="center">Edge density (%)</th>
<th align="center">Mean degree</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">1980s</td>
<td align="center">972</td>
<td align="center">1,197</td>
<td align="center">0.25</td>
<td align="center">2.46</td>
</tr>
<tr>
<td align="center">1990s</td>
<td align="center">2,211</td>
<td align="center">3,062</td>
<td align="center">0.13</td>
<td align="center">2.77</td>
</tr>
<tr>
<td align="center">2000s</td>
<td align="center">5,118</td>
<td align="center">8,890</td>
<td align="center">0.07</td>
<td align="center">3.47</td>
</tr>
<tr>
<td align="center">2010s</td>
<td align="center">9,519</td>
<td align="center">21,455</td>
<td align="center">0.05</td>
<td align="center">4.51</td>
</tr>
</tbody>
</table>
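<p>The edge-density and mean-degree columns follow directly from the node and edge counts (a quick check, using the table's values):</p>

```python
# Node and edge counts from the table above, one entry per decade.
nodes = [972, 2211, 5118, 9519]
edges = [1197, 3062, 8890, 21455]
for n, e in zip(nodes, edges):
    density = 100 * e / (n * (n - 1) / 2)   # edges as % of all possible pairs
    mean_degree = 2 * e / n                  # each edge has two endpoints
    print(round(density, 2), round(mean_degree, 2))
```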
<p>The figure below compares the co-authorship network degree distributions for each sex.
Females tended to have fewer co-authors than males, but the mean difference was small and fell over time (from 0.78 during the 1980s to 0.66 during the 2010s).</p>
<p><img src="figures/degree-distributions-1.svg" alt=""></p>
<p>The next three tables describe structural properties of each decade’s co-authorship network based on authors’ estimated sexes.
These properties may be sensitive to estimation errors.
Therefore, rather than report point estimates for each property, I report 95% confidence intervals obtained using the following bootstrap procedure:</p>
<ol>
<li>Randomly assign each author to be female according to the probabilities obtained from the baby name data.</li>
<li>Compute each structural property under the randomized assignment.</li>
<li>Repeat the preceding two steps 1,000 times to obtain bootstrap distributions of each property.</li>
<li>Use the 2.5% and 97.5% quantiles of the bootstrap distributions as the lower and upper confidence bounds.</li>
</ol>
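<p>The procedure above can be sketched as follows (function names are mine; the toy statistic simply counts the authors assigned female):</p>

```python
import random

def bootstrap_ci(prob_female, statistic, reps=1000, seed=0):
    """95% bootstrap CI for a network statistic that depends on authors' sexes.

    prob_female maps each author to their estimated P(female); statistic
    takes the set of authors assigned female and returns a number."""
    rng = random.Random(seed)
    draws = []
    for _ in range(reps):
        # Step 1: randomly assign sexes according to the name-based probabilities.
        females = {a for a, p in prob_female.items() if rng.random() < p}
        # Step 2: compute the property under this assignment.
        draws.append(statistic(females))
    # Steps 3-4: repeat and take the 2.5% and 97.5% quantiles.
    draws.sort()
    return draws[int(0.025 * reps)], draws[int(0.975 * reps)]

# Toy check: 100 authors, each female with probability 0.5.
probs = {f'author{i}': 0.5 for i in range(100)}
lo, hi = bootstrap_ci(probs, len)
print(lo, hi)   # an interval around 50
```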
<p>The first property I examine is the <a href="https://en.wikipedia.org/wiki/Clustering_coefficient">clustering coefficient</a>: the probability that two authors were co-authors given that they shared a common co-author.
The table below compares the clustering coefficient of the full co-authorship network in each decade with the clustering coefficient of the sub-networks <a href="https://en.wikipedia.org/wiki/Induced_subgraph">induced</a> by the sets of authors whom I estimate to be female and male.</p>
<table>
<thead>
<tr>
<th align="left">Clustering coefficient</th>
<th align="center">1980s</th>
<th align="center">1990s</th>
<th align="center">2000s</th>
<th align="center">2010s</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Overall</td>
<td align="center">0.17</td>
<td align="center">0.18</td>
<td align="center">0.21</td>
<td align="center">0.24</td>
</tr>
<tr>
<td align="left">Among females (95% CI)</td>
<td align="center">(0.39, 0.50)</td>
<td align="center">(0.41, 0.50)</td>
<td align="center">(0.30, 0.35)</td>
<td align="center">(0.32, 0.35)</td>
</tr>
<tr>
<td align="left">Among males (95% CI)</td>
<td align="center">(0.16, 0.17)</td>
<td align="center">(0.17, 0.17)</td>
<td align="center">(0.20, 0.21)</td>
<td align="center">(0.23, 0.23)</td>
</tr>
</tbody>
</table>
<p>The female sub-networks were much more clustered than the full and male networks.
Such clustering suggests a stronger tendency among females to <a href="https://bldavies.com/blog/triadic-closure-nber">close triads</a> by collaborating with other females with whom they share a common (female) co-author.
The decline in clustering among females over time could reflect the rise in between-sex co-authorship: the percentage of co-authored papers with at least one author of each sex was about 16% in the 1980s, and rose to 25%, 35%, and 42% in the subsequent three decades.</p>
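<p>The clustering coefficient used in these comparisons can be computed as follows (a sketch with my own function names):</p>

```python
from itertools import combinations

def clustering(edges):
    """Global clustering coefficient: the fraction of two-paths x - v - y
    whose endpoints x and y are themselves joined by an edge."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    paths = closed = 0
    for v, ns in nbrs.items():
        for x, y in combinations(sorted(ns), 2):   # two-paths centered at v
            paths += 1
            if y in nbrs[x]:
                closed += 1
    return closed / paths

# A triangle (1, 2, 3) plus a pendant node 4: one triangle, five two-paths.
print(clustering([(1, 2), (2, 3), (1, 3), (3, 4)]))  # 0.6
```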
<p>The next property I examine is the <a href="https://bldavies.com/blog/assortative-mixing">assortativity coefficient</a>, which measures the extent to which authors tended to co-author with members of the same sex.
The coefficient equals 1 when there is perfect sorting (i.e., no between-sex edges), −1 when there is perfect dis-sorting (i.e., no within-sex edges), and 0 when there is no sorting (i.e., co-authorships form as if at random).
The table below shows that each network’s assortativity coefficient was positive, implying that within-sex co-authorship was more common than we would expect if co-authorships were random.</p>
<table>
<thead>
<tr>
<th align="center">Decade</th>
<th align="center">Assort. coeff. (95% CI)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">1980s</td>
<td align="center">(0.05, 0.09)</td>
</tr>
<tr>
<td align="center">1990s</td>
<td align="center">(0.08, 0.11)</td>
</tr>
<tr>
<td align="center">2000s</td>
<td align="center">(0.07, 0.09)</td>
</tr>
<tr>
<td align="center">2010s</td>
<td align="center">(0.08, 0.10)</td>
</tr>
</tbody>
</table>
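<p>The assortativity coefficient can be computed from the network's mixing matrix (a sketch of the standard Newman formula; the node labels and function names are mine):</p>

```python
def assortativity(edges, sex):
    """Newman's assortativity coefficient for a binary node attribute.

    edges is a list of undirected (u, v) pairs; sex maps nodes to 'F' or 'M'.
    Each edge is counted in both directions to build the mixing matrix e,
    with marginals a_x; r = (tr e - sum a_x^2) / (1 - sum a_x^2)."""
    cats = ['F', 'M']
    e = {(x, y): 0.0 for x in cats for y in cats}
    for u, v in edges:
        e[(sex[u], sex[v])] += 1
        e[(sex[v], sex[u])] += 1
    total = sum(e.values())
    e = {k: c / total for k, c in e.items()}
    a = {x: sum(e[(x, y)] for y in cats) for x in cats}
    trace = sum(e[(x, x)] for x in cats)
    s = sum(a[x] ** 2 for x in cats)
    return (trace - s) / (1 - s)

sex = {1: 'F', 2: 'F', 3: 'M', 4: 'M'}
print(assortativity([(1, 2), (3, 4)], sex))  # 1.0: perfect sorting
print(assortativity([(1, 3), (2, 4)], sex))  # -1.0: perfect dis-sorting
```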
<p>Computing assortativity coefficients across all programs may mask program-specific patterns.
I explore these patterns in my final table below, which reports 95% confidence intervals for the assortativity coefficient of the co-authorship network within each of the NBER’s ten largest research programs.
I label programs by their abbreviations so that the table is not too wide.</p>
<table>
<thead>
<tr>
<th align="center">Program</th>
<th align="center">1980s</th>
<th align="center">1990s</th>
<th align="center">2000s</th>
<th align="center">2010s</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">LS</td>
<td align="center">(0.16, 0.27)</td>
<td align="center">(0.13, 0.19)</td>
<td align="center">(0.05, 0.09)</td>
<td align="center">(0.09, 0.11)</td>
</tr>
<tr>
<td align="center">EFG</td>
<td align="center">(-0.07, 0.08)</td>
<td align="center">(-0.07, 0.01)</td>
<td align="center">(-0.02, 0.02)</td>
<td align="center">(0.02, 0.06)</td>
</tr>
<tr>
<td align="center">PE</td>
<td align="center">(0.04, 0.14)</td>
<td align="center">(-0.01, 0.05)</td>
<td align="center">(0.03, 0.07)</td>
<td align="center">(0.05, 0.07)</td>
</tr>
<tr>
<td align="center">IFM</td>
<td align="center">(-0.05, 0.04)</td>
<td align="center">(-0.01, 0.08)</td>
<td align="center">(-0.01, 0.05)</td>
<td align="center">(0.03, 0.08)</td>
</tr>
<tr>
<td align="center">ITI</td>
<td align="center">(-0.06, 0.04)</td>
<td align="center">(0.01, 0.09)</td>
<td align="center">(0.00, 0.07)</td>
<td align="center">(0.05, 0.10)</td>
</tr>
<tr>
<td align="center">ME</td>
<td align="center">(-0.07, 0.04)</td>
<td align="center">(-0.03, 0.06)</td>
<td align="center">(-0.10, -0.03)</td>
<td align="center">(0.03, 0.09)</td>
</tr>
<tr>
<td align="center">AP</td>
<td align="center">-</td>
<td align="center">(-0.06, 0.07)</td>
<td align="center">(-0.01, 0.05)</td>
<td align="center">(0.00, 0.05)</td>
</tr>
<tr>
<td align="center">PR</td>
<td align="center">(-0.15, -0.01)</td>
<td align="center">(0.12, 0.22)</td>
<td align="center">(0.02, 0.09)</td>
<td align="center">(0.07, 0.11)</td>
</tr>
<tr>
<td align="center">CF</td>
<td align="center">-</td>
<td align="center">(-0.04, 0.07)</td>
<td align="center">(-0.03, 0.04)</td>
<td align="center">(0.03, 0.09)</td>
</tr>
<tr>
<td align="center">HE</td>
<td align="center">(-0.14, -0.01)</td>
<td align="center">(0.01, 0.09)</td>
<td align="center">(0.01, 0.05)</td>
<td align="center">(0.07, 0.10)</td>
</tr>
<tr>
<td align="center">All</td>
<td align="center">(0.05, 0.09)</td>
<td align="center">(0.08, 0.11)</td>
<td align="center">(0.07, 0.09)</td>
<td align="center">(0.08, 0.10)</td>
</tr>
</tbody>
</table>
<p>The network among authors in the Labor Studies (LS) program became less sorted over time, whereas the network among authors in the Health Economics (HE) program became more sorted over time.
But the representation of women in both of those programs grew over time, suggesting that the mechanisms promoting female representation differed from the mechanisms promoting female collaboration.
It would be interesting to explore these mechanisms further, but I’ll leave that for a future post.</p>
<h2 id="acknowledgements">Acknowledgements</h2>
<p>Thanks to <a href="https://adhami.people.stanford.edu">Mohamad Adhami</a>, <a href="https://fhnilo.sites.stanford.edu">Florencia Hnilo</a> and Akhila Kovvuri for reading draft versions of this post.</p>
<h2 id="appendix">Appendix</h2>
<p>The table below (fractionally) counts working papers by program and decade.
I present programs in decreasing order of associated papers across all four decades.</p>
<table>
<thead>
<tr>
<th align="left">Program</th>
<th align="right">1980s</th>
<th align="right">1990s</th>
<th align="right">2000s</th>
<th align="right">2010s</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Labor Studies (LS)</td>
<td align="right">454</td>
<td align="right">635</td>
<td align="right">868</td>
<td align="right">1,081</td>
</tr>
<tr>
<td align="left">Economic Fluctuations and Growth (EFG)</td>
<td align="right">458</td>
<td align="right">471</td>
<td align="right">921</td>
<td align="right">1,083</td>
</tr>
<tr>
<td align="left">Public Economics (PE)</td>
<td align="right">445</td>
<td align="right">557</td>
<td align="right">827</td>
<td align="right">993</td>
</tr>
<tr>
<td align="left">International Finance and Macroeconomics (IFM)</td>
<td align="right">374</td>
<td align="right">466</td>
<td align="right">731</td>
<td align="right">662</td>
</tr>
<tr>
<td align="left">International Trade and Investment (ITI)</td>
<td align="right">370</td>
<td align="right">517</td>
<td align="right">631</td>
<td align="right">525</td>
</tr>
<tr>
<td align="left">Monetary Economics (ME)</td>
<td align="right">418</td>
<td align="right">327</td>
<td align="right">389</td>
<td align="right">514</td>
</tr>
<tr>
<td align="left">Asset Pricing (AP)</td>
<td align="right">0</td>
<td align="right">221</td>
<td align="right">610</td>
<td align="right">627</td>
</tr>
<tr>
<td align="left">Productivity, Innovation, and Entrepreneurship (PR)</td>
<td align="right">96</td>
<td align="right">231</td>
<td align="right">371</td>
<td align="right">563</td>
</tr>
<tr>
<td align="left">Corporate Finance (CF)</td>
<td align="right">1</td>
<td align="right">131</td>
<td align="right">436</td>
<td align="right">560</td>
</tr>
<tr>
<td align="left">Health Economics (HE)</td>
<td align="right">82</td>
<td align="right">115</td>
<td align="right">355</td>
<td align="right">497</td>
</tr>
<tr>
<td align="left">Development of the American Economy (DAE)</td>
<td align="right">45</td>
<td align="right">78</td>
<td align="right">311</td>
<td align="right">379</td>
</tr>
<tr>
<td align="left">Industrial Organization (IO)</td>
<td align="right">0</td>
<td align="right">82</td>
<td align="right">300</td>
<td align="right">432</td>
</tr>
<tr>
<td align="left">Economics of Aging (AG)</td>
<td align="right">33</td>
<td align="right">126</td>
<td align="right">237</td>
<td align="right">340</td>
</tr>
<tr>
<td align="left">Health Care (HC)</td>
<td align="right">0</td>
<td align="right">100</td>
<td align="right">248</td>
<td align="right">316</td>
</tr>
<tr>
<td align="left">Environment and Energy Economics (EEE)</td>
<td align="right">1</td>
<td align="right">6</td>
<td align="right">138</td>
<td align="right">483</td>
</tr>
<tr>
<td align="left">Economics of Education (ED)</td>
<td align="right">0</td>
<td align="right">1</td>
<td align="right">209</td>
<td align="right">411</td>
</tr>
<tr>
<td align="left">Children (CH)</td>
<td align="right">2</td>
<td align="right">35</td>
<td align="right">246</td>
<td align="right">297</td>
</tr>
<tr>
<td align="left">Political Economics (POL)</td>
<td align="right">0</td>
<td align="right">0</td>
<td align="right">141</td>
<td align="right">415</td>
</tr>
<tr>
<td align="left">Law and Economics (LE)</td>
<td align="right">20</td>
<td align="right">57</td>
<td align="right">188</td>
<td align="right">231</td>
</tr>
<tr>
<td align="left">Development Economics (DEV)</td>
<td align="right">0</td>
<td align="right">0</td>
<td align="right">0</td>
<td align="right">462</td>
</tr>
<tr>
<td align="left">Technical Working Papers (TWP)</td>
<td align="right">0</td>
<td align="right">0</td>
<td align="right">25</td>
<td align="right">95</td>
</tr>
<tr>
<td align="left">None</td>
<td align="right">24</td>
<td align="right">58</td>
<td align="right">7</td>
<td align="right">0</td>
</tr>
<tr>
<td align="left">Total</td>
<td align="right">2,820</td>
<td align="right">4,213</td>
<td align="right">8,188</td>
<td align="right">10,970</td>
</tr>
</tbody>
</table>
Monopoly equilibrium in insurance markets
https://bldavies.com/blog/monopoly-equilibrium-insurance-markets/
Fri, 19 Feb 2021 00:00:00 +0000https://bldavies.com/blog/monopoly-equilibrium-insurance-markets/<p>This post shows how monopoly insurance pricing can lead to inefficient risk sharing.
I describe <a href="#model">a mathematical model</a> of the monopoly equilibrium, present <a href="#numerical-example">a numerical example</a>, and discuss <a href="#limitations">some limitations</a> of my analysis.</p>
<h2 id="model">Model</h2>
<p>Suppose I have initial wealth <code>\(w_0\)</code> and suffer a loss of size <code>\(L\)</code> with probability <code>\(p\)</code>.
I can buy <code>\(c\in[0,L]\)</code> units of insurance coverage at per-unit price <code>\(\lambda p\)</code>, where <code>\(\lambda\ge1\)</code> is a loading factor set by my insurer.
I choose the amount of coverage <code>\(c^*\)</code> that maximizes my expected utility
<code>$$EU(c)\equiv(1-p)u(w_0-\lambda p c)+pu(w_0-\lambda pc-L+c),$$</code>
where
<code>$$u(w)\equiv-\frac{1}{a}\exp(-aw)$$</code>
is my utility function and <code>\(a>0\)</code> is my <a href="https://en.wikipedia.org/wiki/Risk_aversion#Absolute_risk_aversion">coefficient of absolute risk aversion</a>.
Solving the first-order condition for <code>\(c^*\)</code> gives
<code>$$c^*=L-\frac{1}{a}\log\left(\frac{\lambda(1-p)}{1-\lambda p}\right),$$</code>
which equals <code>\(L\)</code> when <code>\(\lambda=1\)</code> (i.e., the premium is actuarially fair) and equals zero when <code>\(\lambda\)</code> equals
<code>$$\lambda_{\text{max}}=\frac{1}{p+(1-p)\exp(-aL)}.$$</code>
This limiting value of <code>\(\lambda\)</code> approaches one as <code>\(aL\)</code> approaches zero—I won’t buy insurance if I am risk neutral or face no risk—and is always less than <code>\(1/p\)</code>.
For <code>\(\lambda\in(1,\lambda_{\text{max}})\)</code>, the slope
<code>$$\newcommand{\parfrac}[2]{\frac{\partial #1}{\partial #2}} \parfrac{c^*}{\lambda}=-\frac{1}{a\lambda(1-\lambda p)}$$</code>
of my demand curve is strictly negative, so my demand for coverage falls as its price rises: I view insurance as an ordinary good.</p>
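<p>The two endpoint cases of the demand formula can be verified numerically (a sketch with my own function names; note that under CARA utility the initial wealth <code>\(w_0\)</code> drops out of the first-order condition):</p>

```python
from math import exp, log

def optimal_coverage(L, p, a, lam):
    """Expected-utility-maximizing coverage c* = L - (1/a)*log(lam*(1-p)/(1-lam*p))."""
    return L - (1 / a) * log(lam * (1 - p) / (1 - lam * p))

def lambda_max(L, p, a):
    """Loading factor at which demand for coverage falls to zero."""
    return 1 / (p + (1 - p) * exp(-a * L))

L, p, a = 20, 0.2, 0.2
print(optimal_coverage(L, p, a, 1.0))   # full coverage: L = 20.0
print(round(optimal_coverage(L, p, a, lambda_max(L, p, a)), 9))  # zero coverage
```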
<p>Now suppose my insurer knows my demand for coverage <code>\(c^*\equiv C(\lambda)\)</code> given the loading factor <code>\(\lambda\)</code>, as well as the other parameters in my choice environment.
Then they can choose <code>\(\lambda\)</code> to maximize their expected profit
<code>$$\pi(\lambda)\equiv(\lambda-1)pC(\lambda),$$</code>
which equals the premium I pay minus the expected cost of indemnifying me.
If <code>\(L>0\)</code> then the profit-maximizing loading factor <code>\(\lambda^*\)</code> is strictly between one and <code>\(\lambda_{\text{max}}\)</code>, and setting <code>\(\lambda=\lambda^*\)</code> gives my insurer positive expected profit.
But then I demand partial coverage <code>\(C(\lambda^*)<L\)</code>, which is allocatively inefficient because I am risk averse but my insurer is risk neutral: having the insurer bear more of my risk would make me better off but my insurer no worse off.
Consequently, we suffer a deadweight loss relative to the equilibrium in which my insurer sets <code>\(\lambda=1\)</code>, I demand full coverage, and my insurer bears all of my risk.</p>
<h2 id="numerical-example">Numerical example</h2>
<p>The figure below describes the monopoly equilibrium when <code>\(w_0=100\)</code>, <code>\(L=20\)</code>, <code>\(p=0.2\)</code>, and <code>\(a=0.2\)</code>.
My insurer best-responds to my demand schedule (the downward-sloping curve) by setting the loading factor equal to <code>\(\lambda^*=3.26\)</code>, which earns them expected profit <code>\(\pi=4.49\)</code>.
At the price <code>\(\lambda^* p=0.65\)</code>, I buy <code>\(c^*=9.94\)</code> units of coverage and enjoy
<code>$$p\int_{\lambda^*}^{\lambda_{\text{max}}}C(\lambda)\,\mathrm{d}\lambda=1.68$$</code>
units of consumer surplus.
In contrast, at the actuarially fair price <code>\(p\)</code> I would have bought full coverage, and although my insurer would have made zero expected profit we would have avoided the deadweight loss of 2.14 generated by our inefficient risk-sharing arrangement at the monopoly equilibrium.</p>
<p><img src="figures/example-1.svg" alt=""></p>
<p>One way to make sense of these numbers is to compute the certainty-equivalent wealth
<code>$$CE(\lambda)=u^{-1}(EU(C(\lambda)))$$</code>
that, if held with certainty, would give me as much utility as I expect to enjoy if I buy <code>\(C(\lambda)\)</code> units of coverage at per-unit price <code>\(\lambda p\)</code>.
Buying insurance at the monopoly equilibrium price raises my certainty-equivalent wealth by <code>\(CE(\lambda^*)-CE(\lambda_{\text{max}})=1.68\)</code>, the consumer surplus I enjoy at that equilibrium.
Making the premium actuarially fair would further raise my certainty-equivalent wealth by <code>\(CE(1)-CE(\lambda^*)=6.63\)</code> but lower my insurer’s expected profit by <code>\(\pi(\lambda^*)=4.49\)</code>; the sum of our surpluses would rise by <code>\(6.63-4.49=2.14\)</code>, the deadweight loss at the monopoly equilibrium.</p>
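<p>The equilibrium values above can be reproduced with a simple grid search over the loading factor (a sketch; the grid search and names are mine):</p>

```python
from math import exp, log

L, p, a = 20, 0.2, 0.2   # parameters of the numerical example

def coverage(lam):
    """My demand C(lambda) for coverage under CARA utility."""
    return L - (1 / a) * log(lam * (1 - p) / (1 - lam * p))

def profit(lam):
    """Insurer's expected profit (lambda - 1) * p * C(lambda)."""
    return (lam - 1) * p * coverage(lam)

lam_max = 1 / (p + (1 - p) * exp(-a * L))
grid = [1 + (lam_max - 1) * k / 10**5 for k in range(1, 10**5)]
lam_star = max(grid, key=profit)
print(round(lam_star, 2))             # 3.26
print(round(coverage(lam_star), 2))   # 9.94
print(round(profit(lam_star), 2))     # 4.49
```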
<p>The chart below presents some comparative statics of the monopoly equilibrium.
I maintain the parameters <code>\(w_0=100\)</code> and <code>\(L=20\)</code> from above, but vary my risk aversion coefficient <code>\(a\)</code> and the probability <code>\(p\)</code> with which I incur the loss.</p>
<p><img src="figures/equilibria-1.svg" alt=""></p>
<p>My insurer sets a higher loading factor and earns more profit when my risk aversion rises.
This is because the mixed partial derivative
<code>$$\parfrac{^2c^*}{\lambda\partial a}=\frac{1}{a^2\lambda(1-\lambda p)}$$</code>
is strictly positive, which means that my demand is less sensitive to price changes when <code>\(a\)</code> is high.
My insurer exploits this lower sensitivity by charging me higher prices.
When <code>\(a\)</code> is small, this exploitation moves us away from the actuarially fair equilibrium and so raises the deadweight loss; when <code>\(a\)</code> is large, I want to buy a lot of insurance despite its high price, and so the deadweight loss is small because having the insurer bear my risk is allocatively efficient.</p>
<p>On the other hand, my insurer sets a lower loading factor when the probability of loss rises.
This is because the mixed partial derivative
<code>$$\parfrac{^2c^*}{\lambda\partial p}=-\frac{1}{a(1-\lambda p)^2}$$</code>
is strictly negative, which means that my demand is more sensitive to price changes when <code>\(p\)</code> is high.
My insurer responds to this sensitivity by forfeiting some of its monopoly power, moving us closer to the actuarially fair equilibrium and lowering the deadweight loss.</p>
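<p>These signs are easy to verify numerically. The sketch below is my own illustration, not code from the post: it assumes the CARA demand function <code>\(C(\lambda)=L-\frac{1}{a}\log\frac{\lambda(1-p)}{1-\lambda p}\)</code>, which I reconstructed to be consistent with the mixed partials above, and recovers both cross-derivatives by finite differences.</p>

```python
import numpy as np

# Assumed CARA demand for coverage, reconstructed from the comparative
# statics in the post (not quoted from it):
#   C(lambda) = L - (1/a) * log(lambda * (1 - p) / (1 - lambda * p))
def demand(lam, a, p, L=20):
    return L - np.log(lam * (1 - p) / (1 - lam * p)) / a

def cross_partial(f, x, y, h=1e-5):
    """Mixed partial d^2 f / (dx dy) by central finite differences."""
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

lam, a, p = 1.5, 0.1, 0.2  # illustrative parameter values

# Positive: demand is less price-sensitive when risk aversion is high
dda = cross_partial(lambda l, a_: demand(l, a_, p), lam, a)

# Negative: demand is more price-sensitive when the loss is more likely
ddp = cross_partial(lambda l, p_: demand(l, a, p_), lam, p)

print(dda > 0, ddp < 0)  # True True
```

<p>The positive value also matches the closed form <code>\(1/(a^2\lambda(1-\lambda p))\)</code> given above, which is how I checked the reconstructed demand function.</p>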
<h2 id="limitations">Limitations</h2>
<p>One issue with my analysis is the assumption that I have exponential utility, which implies that my tolerance for, and demand for insurance against, additive risks does not depend on how rich I am.
Under this assumption, I am equally willing to pay for insurance to avoid a $10 loss when I have $10 as I am when I have $10 million, which seems implausible.
I could instead assume that I have <a href="https://en.wikipedia.org/wiki/Isoelastic_utility">isoelastic utility</a>
<code>$$u(w)\equiv\frac{w^{1-r}-1}{1-r}$$</code>
for some <code>\(r>0\)</code>, which would imply that my willingness to pay for insurance falls as I become richer.
However, replacing exponential with isoelastic utility in the plots above delivers qualitatively identical patterns.</p>
<p>Another issue is the supposition that the insurer knows my demand schedule.
In reality, my insurer would have imperfect information about my utility function and the parameters of my choice environment, and so would not know my inverse demand function <code>\(C(\lambda)\)</code>.
But they could estimate <code>\(C(\lambda)\)</code> by, for example, asking how much insurance I would buy at a range of prices.
They would have to be clever to prevent me from over-reporting my price-sensitivity in an attempt to get cheaper coverage, but I’m sure real-world insurers have solved this problem (at least approximately) given their financial incentives.</p>
Dyadic dependence
https://bldavies.com/blog/dyadic-dependence/
Wed, 10 Feb 2021 00:00:00 +0000https://bldavies.com/blog/dyadic-dependence/<p>Let <code>\([n]\equiv\{1,2,\ldots,n\}\)</code> be a set of individuals.
Suppose I have data <code>\(\{(y_{ij},x_{ij}):i,j\in[n]\ \text{with}\ i<j\}\)</code> on pairs in <code>\([n]\)</code> generated by the process
<code>$$\renewcommand{\epsilon}{\varepsilon} y_{ij}=x_{ij}\beta+\epsilon_{ij},$$</code>
where <code>\(x_{ij}\)</code> is a row vector of pair <code>\(\{i,j\}\)</code>'s characteristics, <code>\(\beta\)</code> is a vector of coefficients to be estimated, and <code>\(\epsilon_{ij}\)</code> is a random error term with zero mean and zero correlation with the <code>\(x_{ij}\)</code>.
For example, <code>\([n]\)</code> could be the nodes in a network, <code>\(x_{ij}\)</code> the dimensions along which nodes <code>\(i\)</code> and <code>\(j\)</code> interact, and <code>\(y_{ij}\)</code> the outcome of such interaction.</p>
<p>We can rewrite the data-generating process (DGP) in matrix form as
<code>$$y=X\beta+\epsilon,$$</code>
where <code>\(y\)</code> is the vector of outcomes, <code>\(X\)</code> is the design matrix, and <code>\(\epsilon\)</code> is the vector of errors.
Here <code>\(X\)</code> has
<code>$$N\equiv\frac{n(n-1)}{2}$$</code>
rows, each corresponding to an unordered pair of individuals in <code>\([n]\)</code>.
Since the <code>\(x_{ij}\)</code> and <code>\(\epsilon_{ij}\)</code> are uncorrelated, the ordinary least squares estimator
<code>$$\hat\beta=(X^T\!X)^{-1}X^T\!y$$</code>
of <code>\(\beta\)</code> is unbiased.
However, <code>\(\hat\beta\)</code> may not be <a href="https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem">efficient</a> because the errors <code>\(\epsilon_{ij}\)</code> may be correlated.
For example, if
<code>$$\epsilon_{ij}=u_i+u_j+v_{ij}$$</code>
with <code>\(u_i\)</code>, <code>\(u_j\)</code>, and <code>\(v_{ij}\)</code> independent then
<code>$$\DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\Var}{Var} \Cov(\epsilon_{ij},\epsilon_{jk})=\Var(u_j).$$</code>
Intuitively, the pairs <code>\(\{i,j\}\)</code> and <code>\(\{j,k\}\)</code> are linked through individual <code>\(j\)</code>, and so any errors specific to that individual affect the errors for both pairs.
Consequently, the <a href="https://en.wikipedia.org/wiki/Homoscedasticity">homoskedastic</a> estimator
<code>$$\widehat{\Var}_{\text{Hom.}}(\hat\beta)=\hat\sigma^2(X^T\!X)^{-1}$$</code>
with
<code>$$\hat\sigma^2=\frac{1}{N}\sum_{ij}\hat\epsilon_{ij}^2$$</code>
and
<code>$$\hat\epsilon_{ij}=y_{ij}-x_{ij}\hat\beta$$</code>
will typically under-estimate the variance in <code>\(\hat\beta\)</code> by failing to account for linked pairs having dependent errors.</p>
<p>So, how can we account for such dependence?
Consider the “sandwich” form
<code>$$\Var(\hat\beta)=BMB$$</code>
of the (co)variance matrix for <code>\(\hat\beta\)</code>, where <code>\(B=(X^T\!X)^{-1}\)</code> is the “bread” matrix and <code>\(M=X^T\!VX\)</code> is the “meat” matrix with <code>\(V=\Var(\epsilon)\)</code> the error (co)variance matrix.
We need to estimate <code>\(M\)</code> because we don’t observe the <code>\(\epsilon_{ij}\)</code>.
Indexing pairs by <code>\(p\)</code>, the homoskedastic estimator defined above uses
<code>$$\begin{align} \hat{M}_{\text{Hom.}} &= \hat\sigma^2X^T\!X \\ &= \hat\sigma^2\sum_{p=1}^Nx_p^T\!x_p, \end{align}$$</code>
which assumes all errors have equal variance.
In contrast, <a href="https://doi.org/10.2307/1912934">White (1980)</a> suggests using
<code>$$\begin{align} \hat{M}_{\text{White}} &= X^T\!\mathrm{diag}\left(\hat\epsilon_p^2\right)X \\ &= \sum_{p=1}^N\hat\epsilon_p^2x_p^T\!x_p, \end{align}$$</code>
which allows for unequal error variances (<a href="https://en.wikipedia.org/wiki/Heteroscedasticity">heteroskedasticity</a>).
But neither <code>\(\hat{M}_{\text{Hom.}}\)</code> nor <code>\(\hat{M}_{\text{White}}\)</code> allows for dyadic dependence among the errors.
To that end, <a href="https://doi.org/10.1093/pan/mpv018">Aronow et al. (2017)</a> suggest augmenting White’s estimator via
<code>$$\begin{align} \hat{M}_{\text{Aronow}} &= \hat{M}_{\text{White}}+\sum_{p=1}^N\sum_{q\in\mathcal{D}(p)}\hat\epsilon_p\hat\epsilon_qx_p^T\!x_q, \end{align}$$</code>
where <code>\(\mathcal{D}(p)\)</code> is the set of pairs <code>\(q\not=p\)</code> linked to <code>\(p\)</code> by a shared individual.
We can express <code>\(\hat{M}_{\text{Aronow}}\)</code> in matrix form as
<code>$$\hat{M}_{\text{Aronow}}=X^T\!\left(D\odot\hat\epsilon\hat\epsilon^T\!\right)X,$$</code>
where <code>\(D=(d_{pq})\)</code> is the dyadic dependence matrix with
<code>$$d_{pq}=\begin{cases} 1 & \text{if pairs}\ p\ \text{and}\ q\ \text{are linked}\\ 0 & \text{otherwise}, \end{cases}$$</code>
and where <code>\(\odot\)</code> denotes element-wise multiplication.
Aronow et al. show that, under mild conditions, <code>\(B\hat{M}_{\text{Aronow}}B\)</code> is a consistent estimator for <code>\(\Var(\hat\beta)\)</code> when the data exhibit dyadic dependence.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
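<p>Here is a minimal NumPy sketch of the sandwich computation, my own illustration rather than code from the paper; the function name and simulated data are made up, but the estimator follows the matrix form <code>\(B\hat{M}_{\text{Aronow}}B\)</code> above.</p>

```python
import numpy as np
from itertools import combinations

def dyadic_robust_vcov(X, resid, pairs):
    """Aronow et al. (2017) sandwich estimator B (X' (D o ee') X) B.

    X     : (N, k) design matrix, one row per unordered pair
    resid : (N,) vector of OLS residuals
    pairs : length-N list of (i, j) tuples naming each pair
    """
    # D[p, q] = 1 when pairs p and q share an individual; the diagonal
    # (p == q) reproduces White's heteroskedasticity-robust terms
    D = np.array([[float(bool(set(p) & set(q))) for q in pairs] for p in pairs])
    B = np.linalg.inv(X.T @ X)                    # "bread"
    M = X.T @ (D * np.outer(resid, resid)) @ X    # "meat"
    return B @ M @ B

# Simulate the DGP used below: x_ij = z_i + z_j, eps_ij = u_i + u_j + v_ij
rng = np.random.default_rng(0)
n = 30
z, u = rng.standard_normal(n), rng.standard_normal(n)
pairs = list(combinations(range(n), 2))
X = np.array([[z[i] + z[j]] for i, j in pairs])
y = X[:, 0] + np.array([u[i] + u[j] + rng.standard_normal() for i, j in pairs])

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
V = dyadic_robust_vcov(X, y - X @ beta_hat, pairs)
print(beta_hat[0], np.sqrt(V[0, 0]))
```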
<p>To see Aronow et al.’s estimator in action, suppose the DGP is given by the system
<code>$$\begin{align} y_{ij} &= \beta x_{ij}+\epsilon_{ij} \\ x_{ij} &= z_i+z_j \\ \epsilon_{ij} &= u_i+u_j+v_{ij}, \end{align}$$</code>
where <code>\(z_i\)</code>, <code>\(z_j\)</code>, <code>\(u_i\)</code>, <code>\(u_j\)</code> and <code>\(v_{ij}\)</code> are iid standard normal, and <code>\(\beta=1\)</code> is the (scalar) coefficient to be estimated.
Both the <code>\(x_{ij}\)</code> and the <code>\(\epsilon_{ij}\)</code> exhibit dyadic dependence, so we expect the homoskedastic and White estimators to under-estimate the true variance in <code>\(\hat\beta\)</code>.
Indeed, the box plots below show that Aronow et al.’s estimator is less biased than the homoskedastic and White estimators, and gets more accurate as the number of individuals <code>\(n\)</code> grows.</p>
<p><img src="figures/plot-1.svg" alt=""></p>
<p>Aronow et al.’s estimator can also be applied to generalized linear models.
For example, suppose
<code>$$y_{ij}=\begin{cases} 1 & \text{if nodes}\ i\ \text{and}\ j\ \text{are adjacent} \\ 0 & \text{otherwise} \end{cases}$$</code>
is an indicator for the event in which nodes <code>\(i\)</code> and <code>\(j\)</code> are adjacent in a network.
We can model the link formation process as
<code>$$\Pr(y_{ij}=1)=\Lambda^{-1}(x_{ij}\beta+\epsilon_{ij}),$$</code>
where <code>\(\Lambda(x)\equiv\log(x/(1-x))\)</code> is the logit link function.
The <a href="https://en.wikipedia.org/wiki/Logistic_regression">logistic regression</a> estimate <code>\(\hat\beta\)</code> of <code>\(\beta\)</code> reveals how the observable characteristics <code>\(x_{ij}\)</code> of nodes <code>\(i\)</code> and <code>\(j\)</code> determine their probability of being adjacent.
We can estimate the variance of <code>\(\hat\beta\)</code> consistently by letting <code>\(\hat{P}_{ij}=\Lambda^{-1}(x_{ij}\hat\beta)\)</code> be the predicted probability for pair <code>\(\{i,j\}\)</code>, replacing the bread matrix <code>\(B=(X^T\!X)^{-1}\)</code> with
<code>$$\hat{B}=\left(X^T\mathrm{diag}\left(\hat{P}_{ij}\left(1-\hat{P}_{ij}\right)\right)X\right)^{-1},$$</code>
and computing <code>\(\hat{B}\hat{M}_{\text{Aronow}}\hat{B}\)</code>.
My co-authors and I use this approach in “<a href="https://bldavies.com/blog/research-funding-collaboration">Research Funding and Collaboration</a>:” we estimate how grant proposal outcomes determine the probability with which pairs of researchers co-author, and we compare <code>\(\hat\sigma^2\hat{B}\)</code> and <code>\(\hat{B}\hat{M}_{\text{Aronow}}\hat{B}\)</code> to show that our inferences are robust to dyadic dependence.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p><a href="https://doi.org/10.1016/j.jdeveco.2006.05.005">Fafchamps and Gubert (2007)</a> describe a similar variance estimator to Aronow et al. but do not establish its consistency. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Assortative mixing
https://bldavies.com/blog/assortative-mixing/
Tue, 02 Feb 2021 00:00:00 +0000https://bldavies.com/blog/assortative-mixing/<p>Let <code>\(N\)</code> be a network with <code>\(n\)</code> nodes, each of which has a “type” belonging to some set <code>\(T\)</code>.
We say that <code>\(N\)</code> is “<a href="https://en.wikipedia.org/wiki/Assortative_mixing">assortatively mixed</a>” if nodes tend to have the same types as their neighbors.
For example, if <code>\(N\)</code> is a social network and <code>\(T\)</code> is a set of interests, then assortative mixing could arise because friends tend to share interests.</p>
<p>How can we measure the extent of assortative mixing in <code>\(N\)</code>?
<a href="https://doi.org/10.1103/PhysRevE.67.026126">Newman (2003)</a> suggests the “assortativity coefficient”
<code>$$r=\frac{\sum_{t\in T}x_{tt}-\sum_{t\in T}y_t^2}{1-\sum_{t\in T}y_t^2},$$</code>
where <code>\(x_{st}\)</code> is the proportion of edges joining nodes of type <code>\(s\)</code> to nodes of type <code>\(t\)</code>, and where
<code>$$y_t=\sum_{s\in T}x_{st}$$</code>
is the proportion of edges incident with nodes of type <code>\(t\)</code>.
The coefficient <code>\(r\)</code> varies between -1 and 1, and takes larger values when <code>\(N\)</code> is more assortatively mixed.
We say that <code>\(N\)</code> is “positively sorted” if <code>\(r>0\)</code> and “negatively sorted” if <code>\(r<0\)</code>.</p>
<p>We can interpret <code>\(r\)</code> by thinking about the “mixing matrix” <code>\(X=(x_{st})\)</code>.
The numerator of <code>\(r\)</code> equals the sum of diagonal entries of <code>\(X\)</code> minus what that sum would be if the distributions of entries across rows and columns were independent.
The denominator of <code>\(r\)</code> is a normalizing constant ensuring <code>\(\lvert r\rvert\le1\)</code>.
Thus <code>\(r\)</code> indexes the frequency of within-type edges in <code>\(N\)</code> relative to the frequency we would expect in a random network with the same proportion of edges incident with each type.</p>
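<p>Computing <code>\(r\)</code> from a mixing matrix takes only a few lines. The sketch below is my own illustration; the two example matrices (perfectly sorted, and independent rows and columns) are chosen to hit the boundary cases <code>\(r=1\)</code> and <code>\(r=0\)</code>.</p>

```python
import numpy as np

def assortativity(X):
    """Newman's assortativity coefficient from a mixing matrix X,
    where X[s, t] is the proportion of edges joining type s to type t."""
    X = np.asarray(X, dtype=float)
    y = X.sum(axis=0)  # proportion of edge ends incident with each type
    return (np.trace(X) - y @ y) / (1 - y @ y)

# All edges join nodes of the same type: perfectly sorted
print(assortativity([[0.5, 0.0], [0.0, 0.5]]))  # 1.0

# Independent rows and columns (x_st = y_s * y_t): no sorting
y = np.array([0.3, 0.7])
print(assortativity(np.outer(y, y)))  # ~0.0 (up to float error)
```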
<p>As an example, suppose <code>\(N\)</code> is a realization of the <a href="https://bldavies.com/blog/generating-random-graphs-communities/">planted partition model</a> with <code>\(n_1\)</code> nodes of type 1, <code>\(n_2=n-n_1\)</code> nodes of type 2, and some proportion
<code>$$p_{st}=\begin{cases} p & \text{if}\ s=t \\ q & \text{otherwise} \end{cases}$$</code>
of edges joining nodes of type <code>\(s\)</code> to nodes of type <code>\(t\)</code>.
Then <code>\(N\)</code> has assortativity coefficient
<code>$$r=\frac{p^2(n_1-1)(n_2-1)-q^2n_1n_2}{p^2(n_1-1)(n_2-1)+pq(n_1(n_1-1)+n_2(n_2-1))+q^2n_1n_2},$$</code>
which equals -1 if <code>\(p=0\)</code> and <code>\(q>0\)</code> (i.e., there are no within-type edges), and equals 1 if <code>\(p>0\)</code> and <code>\(q=0\)</code> (i.e., there are no between-type edges).
If <code>\(p=q\)</code> then
<code>$$r=-\frac{1}{n-1},$$</code>
which converges to zero from below as <code>\(n\)</code> becomes large.
Intuitively, if <code>\(p=q\)</code> then within-type and between-type edges occur at the same rate, but the network is slightly negatively sorted because there are slightly fewer potential within-type edges than potential between-type edges.</p>
<p>If <code>\(n_1=n_2\)</code> then
<code>$$r=\frac{p^2(n-2)-q^2n}{p^2(n-2)+q^2n},$$</code>
which converges to <code>\((p^2-q^2)/(p^2+q^2)\)</code> as <code>\(n\)</code> becomes large.
The figure below demonstrates this case with <code>\(n_1=n_2=25\)</code>.
The network on the left has edge frequencies <code>\((p,q)=(0.15,0.02)\)</code> and assortativity coefficient <code>\(r=0.75\)</code>; the network on the right has edge frequencies <code>\((p,q)=(0.02,0.15)\)</code> and assortativity coefficient <code>\(r=-0.79\)</code>.
Both networks are drawn so that adjacent nodes are closer together.
Nodes in the positively sorted network tend to have neighbors with the same type, while nodes in the negatively sorted network tend to have neighbors with a different type.</p>
<p><img src="figures/example-1.svg" alt=""></p>
<p>The assortativity coefficient <code>\(r\)</code> can be used when <code>\(T\)</code> is a set of categorical types.
In contrast, if <code>\(T\)</code> is a set of scalar quantities then we can measure the extent of assortative mixing via the Pearson correlation coefficient
<code>$$\DeclareMathOperator{\E}{E} \DeclareMathOperator{\Var}{Var} \DeclareMathOperator{\Cov}{Cov} \rho=\frac{\Cov(t_i,t_j)}{\sqrt{\Var(t_i)\Var(t_j)}},$$</code>
where <code>\(t_i\in T\)</code> and <code>\(t_j\in T\)</code> are the “types” of nodes <code>\(i\)</code> and <code>\(j\)</code>, and where (co)variances are computed with respect to the frequency at which nodes of type <code>\(t_i\)</code> and <code>\(t_j\)</code> are adjacent in the network.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
To see how this works, let <code>\(A=(a_{ij})\)</code> be the <code>\(n\times n\)</code> adjacency matrix for <code>\(N\)</code> and let <code>\(W=(w_{ij})\)</code> be the <code>\(n\times n\)</code> “weighting matrix” with entries
<code>$$w_{ij}=\frac{a_{ij}}{\lVert A\rVert},$$</code>
where <code>\(\lVert A\rVert\)</code> denotes the sum of elements in <code>\(A\)</code>.
Then the vector <code>\(t=(t_1,t_2,\ldots,t_n)\)</code> of node types has mean
<code>$$\E[t]=s^Tt,$$</code>
where <code>\(s=(s_1,s_2,\ldots,s_n)\)</code> is the vector of row sums
<code>$$s_i=\sum_{j=1}^nw_{ij}.$$</code>
Intuitively, <code>\(s\)</code> describes the probability mass function for the (marginal) distribution of node types.
Treating <code>\(t_i\)</code> and <code>\(t_j\)</code> as draws from this distribution, we have
<code>$$\begin{align*} \Cov(t_i,t_j) &= \E[t_it_j]-\E[t_i]\E[t_j] \\ &= \sum_{i=1}^n\sum_{j=1}^nw_{ij}t_it_j-(s^Tt)(s^Tt) \\ &= t^TWt-(s^Tt)^2 \end{align*}$$</code>
and similarly
<code>$$\begin{align*} \Var(t_i) &= \E[t_i^2]-\E[t_i]^2 \\ &= \sum_{i=1}^ns_it_i^2-(s^Tt)^2 \\ &= t^TSt-(s^Tt)^2, \end{align*}$$</code>
where <code>\(S\)</code> is the <code>\(n\times n\)</code> matrix with principal diagonal equal to <code>\(s\)</code> and off-diagonal entries equal to zero.
Then
<code>$$\rho=\frac{t^TWt-(s^Tt)^2}{t^TSt-(s^Tt)^2}.$$</code>
For example, if the nodes in <code>\(N\)</code> are arranged such that
<code>$$a_{ij}=\begin{cases}1 & \text{if}\ t_i=t_j \\ 0 & \text{otherwise} \end{cases}$$</code>
then
<code>$$\begin{align*} t^TWt &= \sum_{i=1}^n\sum_{j=1}^nw_{ij}t_it_j \\ &= \sum_{i=1}^nt_i^2\sum_{j=1}^nw_{ij} \\ &= \sum_{i=1}^nt_i^2s_i \\ &= t^TSt \end{align*}$$</code>
and so <code>\(\rho=1\)</code>—that is, if all adjacent nodes have the same scalar type then the coefficient <code>\(\rho\)</code> obtains its maximum value of unity.</p>
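<p>The matrix expression for <code>\(\rho\)</code> translates directly into code. This sketch is my own illustration; the example network is built exactly as in the boundary case just described, so <code>\(\rho\)</code> comes out at its maximum of 1.</p>

```python
import numpy as np

def scalar_assortativity(A, t):
    """rho = (t'Wt - (s't)^2) / (t'St - (s't)^2) from adjacency matrix A
    and a vector t of scalar node types."""
    A = np.asarray(A, dtype=float)
    t = np.asarray(t, dtype=float)
    W = A / A.sum()          # weighting matrix
    s = W.sum(axis=1)        # row sums: marginal type distribution
    mean = s @ t             # E[t]
    cov = t @ W @ t - mean ** 2
    var = (s * t) @ t - mean ** 2   # t'St - (s't)^2
    return cov / var

# a_ij = 1 iff t_i = t_j, as in the boundary case above
t = np.array([0.0, 0.0, 1.0, 1.0])
A = (t[:, None] == t[None, :]).astype(float)
print(scalar_assortativity(A, t))  # 1.0
```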
<p>One common use of the correlation coefficient <code>\(\rho\)</code> is to measure assortativity with respect to nodes’ degrees (see, e.g., <a href="https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.89.208701">Newman, 2002</a>).
For example, the left-hand network in the figure above has <code>\(\rho=0.03\)</code>: although nodes are sorted strongly by color, they are approximately unsorted by degree because the planted partition model from which the network is generated has no mechanism for connecting high-degree nodes.
Performing a <a href="https://bldavies.com/blog/degree-preserving-randomisation/">degree-preserving randomization</a> of the network changes its assortativity with respect to nodes’ degrees by changing the joint distribution of those degrees across node pairs:</p>
<p><img src="figures/dpr-1.svg" alt=""></p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Numerical experimentation suggests <code>\(r=\rho\)</code> whenever <code>\(\lvert T\rvert=2\)</code>, which I prove <a href="https://bldavies.com/blog/assortativity-correlation-coefficients/">here</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Ordinary and total least squares
https://bldavies.com/blog/ordinary-total-least-squares/
Mon, 11 Jan 2021 00:00:00 +0000https://bldavies.com/blog/ordinary-total-least-squares/<p>Suppose <code>\(X\)</code> and <code>\(Y\)</code> are random variables with
<code>$$\DeclareMathOperator{\E}{E} \DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\Var}{Var} \newcommand{\abs}[1]{\lvert#1\rvert} Y=\beta X+u,$$</code>
where <code>\(u\)</code> has zero mean and zero correlation with <code>\(X\)</code>.
The coefficient <code>\(\beta\)</code> can be estimated by collecting data <code>\((Y_i,X_i)_{i=1}^n\)</code> and regressing the <code>\(Y_i\)</code> on the <code>\(X_i\)</code>.
Now suppose our data collection procedure is flawed: instead of observing <code>\(X_i\)</code>, we observe <code>\(Z_i=X_i+v_i\)</code>, where the <code>\(v_i\)</code> are iid with zero mean and zero correlation with the <code>\(X_i\)</code>.
Then the ordinary least squares (OLS) estimate <code>\(\hat\beta_{\text{OLS}}\)</code> of <code>\(\beta\)</code> obtained by regressing the <code>\(Y_i\)</code> on the <code>\(Z_i\)</code> suffers from <a href="https://en.wikipedia.org/wiki/Regression_dilution">attenuation bias</a>:
<code>$$\begin{align*} \DeclareMathOperator*{\plim}{plim} \plim_{n\to\infty}\hat\beta_{\text{OLS}} &=\frac{\Cov(Y,Z)}{\Var(Z)} \\ &=\frac{\Cov(\beta X+u,X+v)}{\Var(X+v)} \\ &= \frac{\beta\Var(X)}{\Var(X)+\Var(v)} \\ &= \frac{\beta}{1+\Var(v)/\Var(X)} \end{align*}$$</code>
and so <code>\(\abs{\hat\beta_{\text{OLS}}}<\abs{\beta}\)</code> asymptotically whenever <code>\(\Var(v)>0\)</code>.
Intuitively, the measurement errors <code>\(v_i\)</code> spread out the independent variable, flattening the fitted regression line.</p>
<p>One way to reduce attenuation bias is to replace OLS with total least squares (TLS), which accounts for noise in the dependent <em>and</em> independent variables.
As a demonstration, the chart below compares the OLS and TLS lines of best fit through some randomly generated data <code>\((Y_i,Z_i)_{i=1}^n\)</code> with <code>\(\beta=1\)</code>.
The OLS estimate <code>\(\hat\beta_{\text{OLS}}=0.43\)</code> minimizes the sum of squared <em>vertical</em> deviations of the data from the fitted line.
In contrast, the TLS estimate <code>\(\hat\beta_{\text{TLS}}=0.95\)</code> minimizes the sum of squared <em>perpendicular</em> deviations of the data from the fitted line.
For these data, the TLS estimate is unbiased because <code>\(u\)</code> and <code>\(v\)</code> have the same variance.</p>
<p><img src="figures/example-1.svg" alt=""></p>
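<p>The attenuation, and its TLS correction, are easy to reproduce. The sketch below is my own illustration (not the code behind the chart): it simulates the DGP with <code>\(\beta=1\)</code> and <code>\(\Var(u)=\Var(v)=1\)</code>, then computes the OLS slope and the closed-form TLS slope for equal error variances.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 20_000, 1.0
x = rng.standard_normal(n)              # true regressor X
y = beta * x + rng.standard_normal(n)   # Y = beta X + u
z = x + rng.standard_normal(n)          # observed Z = X + v

# OLS slope Cov(Y, Z) / Var(Z): attenuated toward zero
ols = np.cov(y, z)[0, 1] / np.var(z, ddof=1)

# TLS slope minimizing perpendicular deviations (equal error variances):
# classic closed form in terms of centered second moments
syy, szz = np.var(y, ddof=1), np.var(z, ddof=1)
syz = np.cov(y, z)[0, 1]
tls = (syy - szz + np.sqrt((syy - szz) ** 2 + 4 * syz ** 2)) / (2 * syz)

print(round(ols, 2), round(tls, 2))  # ols near 0.5, tls near 1
```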
<p>However, if <code>\(u\)</code> and <code>\(v\)</code> have different variances then the TLS estimate of <code>\(\beta\)</code> is biased.
I demonstrate this phenomenon in the chart below, which compares the OLS and TLS estimates of <code>\(\beta=1\)</code> for varying <code>\(\Var(u)\)</code> and <code>\(\Var(v)\)</code> when <code>\(X\)</code> is standard normal.
I plot the bias <code>\(\E[\hat\beta-\beta]\)</code> and mean squared error <code>\(\E[(\hat\beta-\beta)^2]\)</code> of each estimate <code>\(\hat\beta\in\{\hat\beta_{\text{OLS}},\hat\beta_{\text{TLS}}\}\)</code>, obtained by simulating the data-generating process 100 times for each <code>\((\Var(u),\Var(v))\)</code> pair.</p>
<p><img src="figures/comparison-1.svg" alt=""></p>
<p>If <code>\(\Var(u)>\Var(v)\)</code> then the TLS estimate <code>\(\hat\beta_{\text{TLS}}\)</code> is biased upward because the data are relatively stretched vertically; if <code>\(\Var(u)<\Var(v)\)</code> then <code>\(\hat\beta_{\text{TLS}}\)</code> is biased downward because the data are relatively stretched horizontally.
The OLS estimate is biased downward whenever <code>\(\Var(u)>0\)</code> due to attenuation.
The TLS estimate is less biased and has smaller mean squared error than the OLS estimate when <code>\(\Var(u)<\Var(v)\)</code>, suggesting that TLS generates “better” estimates than OLS when the measurement errors <code>\(v_i\)</code> are relatively large.</p>
<p>One problem with TLS estimates is that they depend on the units in which variables are measured.
For example, suppose <code>\(Y_i\)</code> is person <code>\(i\)</code>'s weight and <code>\(Z_i\)</code> is their height.
If I measure <code>\(Y_i\)</code> in pounds, generate a TLS estimate <code>\(\hat\beta_{\text{TLS}}\)</code>, use this estimate to predict the weight in pounds of someone six feet tall, and then convert my prediction to kilograms, I get a different result than if I had measured <code>\(Y_i\)</code> in kilograms initially.
This unit-dependence arises because rescaling the dependent variable affects each perpendicular deviation differently.</p>
<p>In contrast, OLS-based predictions do not depend on the units in which I measure <code>\(Y_i\)</code>.
Rescaling the dependent variable multiplies each vertical deviation by the same constant, leaving the squared deviation-minimizing coefficient unchanged.</p>
Auctioning vaccines
https://bldavies.com/blog/auctioning-vaccines/
Thu, 17 Dec 2020 00:00:00 +0000https://bldavies.com/blog/auctioning-vaccines/<p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3746231">Pancs (2020)</a> proposes an auction for vaccines in which people can bid on others’ behalf.
This format allows people to internalize the externalities they enjoy from their peers being vaccinated.</p>
<p>For example, suppose there are two vaccines to be allocated among agents A–H, who are connected socially via the network shown below.</p>
<p><img src="figures/network-1.svg" alt=""></p>
<p>Everyone submits bids totaling $60, spread evenly among themselves and their peers.
For example, agent A bids $30 towards vaccinating themself and agent B, while agent B bids $15 towards vaccinating themself and agents A, C, and D.
Intuitively, agent A values vaccinating B highly because it protects A fully from viruses transmitted among agents C–H.
In contrast, B has more peers and so values vaccinating any one of those peers less because it doesn’t protect B fully from the rest of the network.</p>
<p>The “aggregate bid” for each agent equals the sum of bids submitted towards that agent’s vaccination.
The agents with the highest aggregate bids receive the vaccines.
In this example, agents B and F receive the vaccine, with aggregate bids equal to $94 and $87.</p>
<p>Each agent receives surplus equal to their subjective valuation of the vaccine allocation minus their payment towards that allocation’s provision.
This payment equals the increase in aggregate surplus that other agents would receive if the agent’s bids were ignored.
Thus, the vaccine auction is a type of <a href="https://en.wikipedia.org/wiki/Vickrey%E2%80%93Clarke%E2%80%93Groves_auction">Vickrey-Clarke-Groves (VCG) auction</a> in which each agent pays the harm they inflict on other agents.
Consequently, the vaccine auction inherits the properties of VCG auctions; in particular, bids equal subjective valuations.
This property makes it easy to compute pre-payment surpluses: simply sum each agent’s bids towards vaccinated agents.</p>
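<p>The payment rule is easy to sketch. The code below is my own toy implementation of the VCG logic on a hypothetical four-agent example — the agent names and bid amounts are made up for illustration and do not reproduce the eight-agent network analyzed in this post.</p>

```python
def allocate(bids, candidates, k):
    """Give k vaccines to the candidates with the highest aggregate bids,
    where bids[i][j] is agent i's bid toward vaccinating agent j."""
    agg = {j: sum(b.get(j, 0) for b in bids.values()) for j in candidates}
    return set(sorted(candidates, key=lambda j: -agg[j])[:k]), agg

def vcg_payment(bids, candidates, k, i):
    """i pays the rise in others' surplus that ignoring i's bids would cause."""
    others = {a: b for a, b in bids.items() if a != i}
    surplus = lambda alloc: sum(b.get(j, 0) for b in others.values() for j in alloc)
    with_i, _ = allocate(bids, candidates, k)
    without_i, _ = allocate(others, candidates, k)
    return surplus(without_i) - surplus(with_i)

# Hypothetical four-agent example; bid amounts are made up for illustration
bids = {
    "A": {"A": 35, "B": 25},
    "B": {"A": 16, "B": 16, "C": 14, "D": 14},
    "C": {"B": 22, "C": 38},
    "D": {"C": 20, "D": 40},
}
agents = list(bids)
winners, agg = allocate(bids, agents, k=2)
payments = {i: vcg_payment(bids, agents, 2, i) for i in bids}
print(winners, payments)
```

<p>Here agents B and D pay nothing because ignoring their bids would not change the winning allocation, while A and C pay for shifting the allocation away from other agents' preferred outcome.</p>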
<p>The table below presents the aggregate bid for, payment made by, and surplus delivered to each agent under the optimal vaccine allocation.
Agents B and F don’t have to pay for the vaccines they receive because others are willing to pay on their behalf.
Agent A pays $15 because their bid towards vaccinating B shifts the optimal allocation away from E, which lowers F’s surplus by $15.
Likewise, agents G and H pay because their preference to vaccinate F, rather than E, makes B–D worse off.</p>
<table>
<thead>
<tr>
<th align="center">Agent</th>
<th align="center">Aggregate bid ($)</th>
<th align="center">Payment ($)</th>
<th align="center">Surplus ($)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">A</td>
<td align="center">42</td>
<td align="center">15</td>
<td align="center">15</td>
</tr>
<tr>
<td align="center">B</td>
<td align="center">94</td>
<td align="center">0</td>
<td align="center">12</td>
</tr>
<tr>
<td align="center">C</td>
<td align="center">44</td>
<td align="center">0</td>
<td align="center">20</td>
</tr>
<tr>
<td align="center">D</td>
<td align="center">44</td>
<td align="center">0</td>
<td align="center">20</td>
</tr>
<tr>
<td align="center">E</td>
<td align="center">79</td>
<td align="center">0</td>
<td align="center">24</td>
</tr>
<tr>
<td align="center">F</td>
<td align="center">87</td>
<td align="center">0</td>
<td align="center">15</td>
</tr>
<tr>
<td align="center">G</td>
<td align="center">45</td>
<td align="center">22</td>
<td align="center">8</td>
</tr>
<tr>
<td align="center">H</td>
<td align="center">45</td>
<td align="center">22</td>
<td align="center">8</td>
</tr>
</tbody>
</table>
<p>This example departs from reality in two important ways.
First, I assume each agent’s bids sum to a constant ($60).
This assumption is obviously unrealistic: wealth inequality means some people can afford to submit higher bids than others, which may lead to inequitable vaccine allocations.
Moreover, people may vary in their willingness to pay for vaccines independently of the variation in their wealths.</p>
<p>Second, I assume every agent wants to be vaccinated.
This common desire may not hold in reality: some people may prefer not to be vaccinated because they fear potential side-effects.
Such people may refuse to participate in the auction, reducing social welfare by preventing some externalities from being internalized.</p>
Gift exchange mechanisms
https://bldavies.com/blog/gift-exchange-mechanisms/
Sun, 13 Dec 2020 00:00:00 +0000https://bldavies.com/blog/gift-exchange-mechanisms/<p>Last December <a href="https://bldavies.com/blog/white-elephant-gift-exchanges/">I compared strategies for playing white elephant</a>, a game in which people take turns either unwrapping a gift or stealing a previously unwrapped gift.
It turned out that players’ best strategy was to be “greedy” by stealing the most subjectively valuable unwrapped gift.
Intuitively, this strategy helps players obtain the gift they want most, provided no other players also want that gift and steal it later in the game.</p>
<p>White elephant exchanges are a fun, but not necessarily optimal, way to match people with gifts.
Another way is to use the <a href="https://en.wikipedia.org/wiki/Top_trading_cycle">top trading cycle</a> (TTC) algorithm:</p>
<ol>
<li>Give everyone a random unwrapped gift.</li>
<li>Ask everyone to point at the most subjectively valuable gift (which may be their own).</li>
<li>If there is a closed cycle of people pointing at each other’s gifts, give everyone in that cycle the gift at which they’re pointing, and remove those people and gifts from consideration.</li>
<li>If there are no gifts remaining then stop. Otherwise, return to step 2.</li>
</ol>
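<p>The steps above take only a few lines of code. The sketch below is my own illustration; it relabels so that person <code>\(i\)</code>'s random endowment is gift <code>\(i\)</code>, and the valuation matrix is made up.</p>

```python
def top_trading_cycles(values):
    """values[i][j] = person i's subjective value of gift j.
    Person i starts with gift i; returns a dict {person: gift}."""
    n = len(values)
    remaining = set(range(n))
    allocation = {}
    while remaining:
        # Each remaining person points at their favorite remaining gift
        points = {i: max(remaining, key=lambda j: values[i][j]) for i in remaining}
        # Follow pointers from any person until a cycle closes
        i, seen = next(iter(remaining)), []
        while i not in seen:
            seen.append(i)
            i = points[i]
        cycle = seen[seen.index(i):]  # the closed cycle
        for p in cycle:
            allocation[p] = points[p]
        remaining -= set(cycle)
    return allocation

# Three people: 0 and 1 most value each other's gifts, 2 prefers their own
values = [
    [1, 9, 2],
    [8, 3, 4],
    [2, 5, 6],
]
print(top_trading_cycles(values))  # {0: 1, 1: 0, 2: 2}
```

<p>In the first round, persons 0 and 1 form a two-person cycle and swap, while person 2 keeps their own gift in the next round.</p>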
<p>The allocation delivered by this algorithm has several desirable properties.
First, it is <a href="https://en.wikipedia.org/wiki/Pareto_efficiency">Pareto efficient</a>: every cycle identifies a mutually beneficial exchange, and the algorithm stops when no such exchanges remain.
Second, it is <a href="https://en.wikipedia.org/wiki/Strategyproofness">strategy-proof</a>: people cannot get better gifts by lying about their preferences (see <a href="https://doi.org/10.1016/0165-1765%2882%2990003-9">Roth, 1982</a>).
Third, it is <a href="https://en.wikipedia.org/wiki/Core_%28game_theory%29">core-stable</a>: no group of people can cooperate to improve their allocations, for otherwise they would have formed a cycle before the algorithm stopped.</p>
<p>However, the TTC algorithm may not deliver the allocation that maximizes the sum of gifts’ subjective values.
This allocation corresponds to a <a href="https://en.wikipedia.org/wiki/Maximum_weight_matching">maximum-weight matching</a> in the bipartite graph connecting people to gifts, with each edge’s weight equal to the incident player’s subjective value of the incident gift.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p>The chart below compares the mean subjective value of the gifts allocated using a game of white elephant, using the TTC algorithm, and by finding a maximum-weight matching.
I compute these allocations as follows.
First, I define person <code>\(i\)</code>'s subjective value of gift <code>\(j\)</code> as
<code>$$V_{ij}=\rho X_j+(1-\rho)Y_{ij},$$</code>
where <code>\(X_j\)</code> and <code>\(Y_{ij}\)</code> are iid uniformly distributed on the unit interval.
The parameter <code>\(\rho\)</code> determines the correlation of gifts’ subjective values across people: if <code>\(\rho=0\)</code> then everyone’s valuations are independent, whereas if <code>\(\rho=1\)</code> then everyone has the same valuation of each gift.
For a range of <code>\(\rho\)</code> values, I simulate 100 valuation sets <code>\(\{V_{ij}:i,j\in\{1,2,\ldots,30\}\}\)</code>, and apply each gift exchange mechanism to each set.
In the white elephant games, I assume all players adopt the greedy strategy described above unless the best unwrapped gift has subjective value less than <code>\(\mathrm{E}[V_{ij}]=0.5\)</code>, in which case players unwrap a new gift.</p>
<p><img src="figures/means-1.svg" alt=""></p>
<p>All three gift exchange mechanisms get worse as gifts’ subjective values become more correlated.
Intuitively, as the correlation increases, there are fewer Pareto-improving trades and so people get stuck with their random endowments.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
The allocations delivered via white elephant games and the TTC algorithm have similar allocative efficiencies, even though white elephant players can’t assign subjective values to gifts until they are unwrapped.</p>
<p>Yet white elephant games are much more popular at Christmas parties than the TTC algorithm.
One explanation could be that the algorithm tends to reveal a lot of information about people’s preferences and, in particular, may make people more upset about contributing a gift no-one wants.
I justify this claim in the following chart, which plots the number of times someone rejects each gift for another in my simulated exchanges.
For example, I add one to gift A’s rejection count if</p>
<ol>
<li>a white elephant player could steal gift A but instead steals gift B, or</li>
<li>I’m running the TTC algorithm and someone could point at gift A but instead points at gift B.</li>
</ol>
<p>Intuitively, these rejection events reveal that gift A has lower subjective value than other gifts, and the more often this happens, the more likely the person who contributed gift A is to feel bad about their contribution.</p>
<p><img src="figures/rejections-1.svg" alt=""></p>
<p>Most Christmas parties set a target amount to be spent on each gift, so—to the extent that cost correlates positively with value—the empirically relevant region of the chart is where the correlation of subjective values is high.
In this region, running the TTC algorithm tends to generate many more rejection events than running a game of white elephant.
Intuitively, if the correlation of subjective values is high then people will tend to point at the same gifts. Fewer cycles will form, more iterations will be required before the TTC algorithm stops, and hence the algorithm will force people to reveal more about their preferences as the market slowly clears.
On the other hand, the unobservability of wrapped gifts’ subjective values means that white elephant players have fewer opportunities to reveal their preferences, regardless of whether those preferences are shared by other players.</p>
<hr>
<p><em>Thanks to <a href="https://adhami.people.stanford.edu">Mohamad Adhami</a>, <a href="https://nickcao.com/">Nick Cao</a>, and <a href="https://www.spantoja.com">Spencer Pantoja</a> for commenting on a draft version of this post.</em></p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>The maximum-weight matching is hard to find in practice because it requires complete information about people’s preferences. In contrast, white elephant games and the TTC algorithm elicit people’s preferences by asking them to choose explicitly which gifts they want. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>In white elephant games, the randomness comes from the order in which people take their turns choosing whether to unwrap or steal. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Aggregating preferences: Bird of the Year edition
https://bldavies.com/blog/aggregating-preferences-bird-year-edition/
Sun, 29 Nov 2020 00:00:00 +0000https://bldavies.com/blog/aggregating-preferences-bird-year-edition/<p>Earlier this month the <a href="https://www.birdoftheyear.org.nz/kakapo">kākāpō</a> was elected <a href="https://www.birdoftheyear.org.nz">Bird of the Year</a> for 2020.
The news prompted me to review the results of <a href="https://bldavies.com/blog/birds-voting-russian-interference/">last year’s election</a>, in which the kākāpō lost narrowly to the yellow-eyed penguin.
In particular, I wanted to determine whether the 2019 results were sensitive to the method used to aggregate voters’ preferences.
This post summarises my findings: different methods deliver (slightly) different outcomes, and at least one method would have crowned the kākāpō.</p>
<p>Bird of the Year elections run as follows.
Each voter selects up to five birds, ranks their selections in order of preference, and submits their ranking on the election website.
These submissions determine the winning bird via the <a href="https://en.wikipedia.org/wiki/Instant-runoff_voting">instant-runoff</a> (IR) method:</p>
<ol>
<li>Count the ballots on which each bird is ranked first.</li>
<li>If one bird is ranked first on a majority of ballots then elect it.
Otherwise, eliminate the bird ranked first on the fewest ballots and return to step 1.</li>
</ol>
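<p>The two steps above can be sketched in Python as follows (a hypothetical implementation, not the election’s official one; ballots are lists of bird names, most preferred first):</p>

```python
from collections import Counter

def instant_runoff(ballots):
    """Elect a winner by repeatedly eliminating the candidate ranked
    first on the fewest ballots until one has a majority."""
    candidates = {c for b in ballots for c in b}
    while True:
        # Step 1: count first-place votes among remaining candidates.
        firsts = Counter(
            next(c for c in b if c in candidates)
            for b in ballots
            if any(c in candidates for c in b)
        )
        total = sum(firsts.values())
        leader, votes = firsts.most_common(1)[0]
        # Step 2: elect a majority winner, else eliminate the last-placed bird.
        if votes * 2 > total or len(candidates) == 1:
            return leader
        candidates.discard(min(candidates, key=lambda c: firsts.get(c, 0)))

winner = instant_runoff([["kea", "kaka"], ["kaka", "kea"], ["kaka"],
                         ["ruru", "kea"], ["ruru", "kea"]])
```

<p>In this toy example no bird initially has a majority, so the bird with the fewest first-place votes is eliminated and its ballots transfer to their next choices.</p>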
<p>Using the IR method, rather than a <a href="https://en.wikipedia.org/wiki/Plurality_voting">plurality vote</a> (in which the bird listed first on the most ballots wins), mitigates <a href="https://en.wikipedia.org/wiki/Vote_splitting">vote-splitting</a> because voters can list multiple birds on their ballots.
However, the IR method violates the <a href="https://en.wikipedia.org/wiki/Condorcet_criterion">Condorcet criterion</a>: a bird may lose the election even if it would beat every other bird in a head-to-head plurality vote.
One way to satisfy this criterion is to use <a href="https://en.wikipedia.org/wiki/Copeland%27s_method">Copeland’s method</a>, which ranks birds by the number of pairwise plurality votes they win minus the number of such votes they lose.</p>
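<p>Copeland’s method can be sketched as follows (my own code; the scoring convention of plus one per pairwise win and minus one per loss follows the description above, and birds missing from a ballot are assumed to rank below the listed ones):</p>

```python
from itertools import combinations

def copeland_scores(ballots):
    """Score each candidate by pairwise plurality wins minus losses.
    Candidates missing from a ballot are treated as ranked last."""
    def rank(ballot, c):
        return ballot.index(c) if c in ballot else float("inf")

    candidates = sorted({c for b in ballots for c in b})
    scores = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_wins = sum(rank(bal, a) < rank(bal, b) for bal in ballots)
        b_wins = sum(rank(bal, b) < rank(bal, a) for bal in ballots)
        if a_wins != b_wins:
            winner, loser = (a, b) if a_wins > b_wins else (b, a)
            scores[winner] += 1
            scores[loser] -= 1
    return scores

scores = copeland_scores([["kea", "kaka"], ["kaka", "kea"], ["kaka"],
                          ["ruru", "kea"], ["ruru", "kea"]])
```

<p>Note that the bird with the highest Copeland score need not be the instant-runoff winner, which is how the two methods can disagree.</p>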
<p>The IR method and Copeland’s method both rely on noiseless within-ballot rankings.
I suspect this property does not hold for Bird of the Year elections.
After selecting up to five birds, voters are asked to rearrange their selections from most to least preferred before submitting their ballots.
It seems likely that many voters skip this rearrangement, either because they can’t be bothered or because they are approximately indifferent among their selections.
In either case, voters’ preferences might be better aggregated using an <a href="https://en.wikipedia.org/wiki/Approval_voting">approval</a>-based system: each bird earns one point for each ballot appearance, and the bird with the most points wins.</p>
<p>One obvious problem with the approval-based system is that voters may approve of more than five birds, but cannot signal such approval because the “up to five” constraint binds.
On the other hand, some voters may feel obliged to list five birds on their ballots even if they approve of only four birds or fewer.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
The most defensible way to deal with these possibilities seems (to me) to be to use a plurality vote, which assumes the minimal completeness of voters’ individual preferences by treating only their first choices as informative.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup></p>
<p>The table below presents the top-placing birds in the 2019 election using the IR method, and those birds’ places under the other preference aggregation methods described above.
The kākāpō was actually the Condorcet winner; it would have beaten every other bird in a head-to-head plurality vote.
Nevertheless, the IR method crowned the yellow-eyed penguin, as would have the approval-based system and a simple plurality vote.</p>
<table>
<thead>
<tr>
<th align="left">Bird</th>
<th align="center">IR place</th>
<th align="center">Copeland place</th>
<th align="center">Approval place</th>
<th align="center">Plurality place</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Yellow-eyed penguin</td>
<td align="center">1</td>
<td align="center">4</td>
<td align="center">1</td>
<td align="center">1</td>
</tr>
<tr>
<td align="left">Kākāpō</td>
<td align="center">2</td>
<td align="center">1</td>
<td align="center">2</td>
<td align="center">2</td>
</tr>
<tr>
<td align="left">Black Robin</td>
<td align="center">3</td>
<td align="center">2</td>
<td align="center">3</td>
<td align="center">5</td>
</tr>
<tr>
<td align="left">Banded Dotterel</td>
<td align="center">4</td>
<td align="center">8</td>
<td align="center">5</td>
<td align="center">3</td>
</tr>
<tr>
<td align="left">Fantail</td>
<td align="center">5</td>
<td align="center">12</td>
<td align="center">9</td>
<td align="center">4</td>
</tr>
<tr>
<td align="left">New Zealand Falcon</td>
<td align="center">6</td>
<td align="center">10</td>
<td align="center">10</td>
<td align="center">9</td>
</tr>
<tr>
<td align="left">Kererū</td>
<td align="center">7</td>
<td align="center">11</td>
<td align="center">11</td>
<td align="center">8</td>
</tr>
<tr>
<td align="left">Blue Duck</td>
<td align="center">8</td>
<td align="center">9</td>
<td align="center">8</td>
<td align="center">7</td>
</tr>
<tr>
<td align="left">Kea</td>
<td align="center">9</td>
<td align="center">6</td>
<td align="center">6</td>
<td align="center">10</td>
</tr>
<tr>
<td align="left">Kākā</td>
<td align="center">10</td>
<td align="center">3</td>
<td align="center">4</td>
<td align="center">11</td>
</tr>
</tbody>
</table>
<p>The figure below compares all candidate birds’ places using the IR method to their places obtained using the alternative methods.
The IR method delivers results most similar to a plurality vote and least similar to Copeland’s method, as shown by the relative deviations of points from the 45-degree line.
These patterns suggest that voters’ second through fifth choices for Bird of the Year didn’t affect the 2019 election outcome materially.</p>
<p><img src="figures/comparison-1.svg" alt=""></p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Of the 43,460 ballots cast in last year’s election, 91.3% listed five birds, 1.4% listed four birds, 1.2% listed three birds, 0.8% listed two birds, and 5.2% listed one bird. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Nominating a “first choice” requires only that a voter can identify at least one bird that they prefer to at least one other bird. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Polarized beliefs in social networks
https://bldavies.com/blog/polarized-beliefs-social-networks/
Thu, 29 Oct 2020 00:00:00 +0000https://bldavies.com/blog/polarized-beliefs-social-networks/<p>Suppose 50 people each have four friends.
Everyone believes that some proposition—say, “corporate tax rates should be higher”—is either true or false, with equal probability and independently of everyone else.
Consequently, the social network among the 50 people is initially unsorted with respect to people’s beliefs.
However, the network’s structure changes over time, in discrete time steps, according to two rules:</p>
<ol>
<li>everyone updates their belief to match the majority within their friend group (comprised of themselves and their neighbours in the network), defaulting to their previous belief to break ties;</li>
<li>edges appear between people who hold the same belief and disappear between people who hold different beliefs, both with probability 0.01.</li>
</ol>
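<p>A minimal Python sketch of these two rules (my own reimplementation; in particular, the construction of the initial network is an assumption, since the setup only states that everyone starts with four friends):</p>

```python
import random

def simulate(n=50, k=4, p=0.01, steps=30, seed=0):
    """Alternate majority-rule belief updates with random edge
    creation/deletion that depends on belief agreement."""
    rng = random.Random(seed)
    beliefs = [rng.random() < 0.5 for _ in range(n)]
    # Assumed initial network: each person links to k random others.
    edges = set()
    for i in range(n):
        for j in rng.sample([x for x in range(n) if x != i], k):
            edges.add(frozenset((i, j)))
    for _ in range(steps):
        # Rule 1: adopt the majority belief in one's friend group,
        # keeping the previous belief to break ties.
        new_beliefs = []
        for i in range(n):
            group = [beliefs[i]] + [
                beliefs[next(iter(e - {i}))] for e in edges if i in e
            ]
            trues = sum(group)
            if trues * 2 == len(group):
                new_beliefs.append(beliefs[i])
            else:
                new_beliefs.append(trues * 2 > len(group))
        beliefs = new_beliefs
        # Rule 2: same-belief non-edges appear, and different-belief
        # edges disappear, each with probability p.
        for i in range(n):
            for j in range(i + 1, n):
                e = frozenset((i, j))
                if beliefs[i] == beliefs[j] and e not in edges:
                    if rng.random() < p:
                        edges.add(e)
                elif beliefs[i] != beliefs[j] and e in edges:
                    if rng.random() < p:
                        edges.discard(e)
    return beliefs, edges

beliefs, edges = simulate()
```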
<p>The first rule describes a “social learning” process: people update their beliefs to match the majority among their friends.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
The second rule describes a “peer selection” process: people choose friends who share the same beliefs.
These two processes can lead to polarized beliefs, even if there is no polarization before the processes begin.
I demonstrate this phenomenon in the figure below, which plots the beliefs and connections in a simulated network after zero, 10, 20, and 30 time steps.
The figure shows how people grow increasingly connected to others with the same belief and decreasingly connected to others with the opposing belief.</p>
<p><img src="figures/networks-1.svg" alt=""></p>
<p>The social learning and peer selection processes can lead to polarization both together and separately.
I justify this claim in the figure below.
The left-hand panel plots the network’s <a href="https://bldavies.com/blog/assortative-mixing/">assortativity coefficient</a>, which measures the overall correlation among friends’ beliefs.
This coefficient equals one when all neighbours share the same beliefs (complete polarization) and equals zero when edges are placed as if at random.
The right-hand panel plots the proportion of people in the network who update their belief at each time step.
Both panels present means and 95% confidence intervals across 30 simulated networks, each with randomized initial beliefs.</p>
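<p>The assortativity coefficient for a binary attribute can be computed directly from the mixing matrix (a sketch following Newman’s standard formula; variable names are mine):</p>

```python
def assortativity(edges, belief):
    """Newman's assortativity coefficient for a binary node attribute:
    r = (sum_g e_gg - sum_g a_g^2) / (1 - sum_g a_g^2), where e[(g, h)]
    is the fraction of directed edge ends joining classes g and h."""
    groups = (False, True)
    e = {(g, h): 0.0 for g in groups for h in groups}
    for u, v in edges:
        # Count each undirected edge once in each direction.
        e[(belief[u], belief[v])] += 1
        e[(belief[v], belief[u])] += 1
    total = sum(e.values())
    e = {key: x / total for key, x in e.items()}
    a = {g: e[(g, False)] + e[(g, True)] for g in groups}
    sum_sq = sum(a[g] ** 2 for g in groups)
    trace = e[(False, False)] + e[(True, True)]
    return (trace - sum_sq) / (1 - sum_sq)

# Completely polarized toy network: two same-belief edges, no cross edges.
r = assortativity([(0, 1), (2, 3)], [True, True, False, False])
```

<p>Here <code>r</code> equals one because every edge joins people with the same belief; adding cross-belief edges pulls it toward zero.</p>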
<p><img src="figures/network-attributes-1.svg" alt=""></p>
<p>The social learning process leads to positive sorting because, by construction, people increasingly share the same beliefs as their friends.
The peer selection process leads to positive sorting because, by construction, edges increasingly connect people with common beliefs only.
The two processes work together to separate the subnetwork of people who believe the proposition is true from the subnetwork of people who believe it is false.
Interestingly, most belief updates occur very early: after about five time steps, most of the structural changes in the social network result from edge creations and deletions rather than from belief updates.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>See <a href="https://bldavies.com/blog/degroot-learning-social-networks/">my blog post on DeGroot learning</a> for more discussion of social learning processes. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Estimating sensitive parameters
https://bldavies.com/blog/estimating-sensitive-parameters/
Wed, 21 Oct 2020 00:00:00 +0000https://bldavies.com/blog/estimating-sensitive-parameters/<p>Suppose some proportion <code>\(\theta\)</code> of the population engages in a socially undesirable activity—say, evading taxes.
We want to estimate <code>\(\theta\)</code>, but can’t ask people directly because they may fear penalties for incriminating themselves.</p>
<p>One solution to this problem is as follows.
Choose another characteristic that people don’t mind reporting and for which we know the population prevalence—say, whether they are right-handed.
Let <code>\(\alpha\)</code> be the (assumedly known) proportion of the population with this characteristic.
Sample <code>\(n\)</code> people, and give them the following instructions:</p>
<blockquote>
<p>Flip a fair coin, but <em>don’t tell me what you get</em>.
If you get heads, answer the question “do you evade taxes?”
If you get tails, answer the question “are you right-handed?”</p>
</blockquote>
<p>The coin toss outcome’s unobservability shields respondents’ revelation of tax evasion—they could be responding “Yes” to the question of whether they are right-handed.
This shield, hopefully, elicits truthful reporting.
Then, by the <a href="https://en.wikipedia.org/wiki/Law_of_total_probability">Law of Total Probability</a>, the probability that someone responds “Yes” is
<code>$$p=\frac{\theta+\alpha}{2}.$$</code>
Let <code>\(X\)</code> be the number of people who respond “Yes.”
Then <code>\(X\)</code> is Binomially distributed with <code>\(n\)</code> trials and success rate <code>\(p\)</code>, and so has mean <code>\(\mathrm{E}[X]=np\)</code> and variance <code>\(\mathrm{Var}(X)=np(1-p)\)</code>.
Consequently, the estimator
<code>$$\hat\theta_n=2\frac{X}{n}-\alpha$$</code>
of <code>\(\theta\)</code> has mean <code>\(\mathrm{E}[\hat\theta_n]=\theta\)</code> and variance
<code>$$\begin{align*} \mathrm{Var}(\hat\theta_n) &= \frac{4}{n^2}\mathrm{Var}(X) \\ &= \frac{4p(1-p)}{n} \\ &\le \frac{1}{n} \end{align*}$$</code>
since <code>\(4p(1-p)\le1\)</code> for any <code>\(p\in[0,1]\)</code>.
Thus, <code>\(\hat\theta_n\)</code> is an unbiased estimator of <code>\(\theta\)</code> and becomes more precise as the sample size <code>\(n\)</code> grows.
We can quantify this precision using <a href="https://en.wikipedia.org/wiki/Chebyshev%27s_inequality">Chebyshev’s inequality</a>: for any <code>\(\varepsilon>0\)</code>, we have
<code>$$\Pr(\lvert\hat\theta_n-\theta\rvert\ge\varepsilon)\le\frac{\mathrm{Var}(\hat\theta_n)}{\varepsilon^2}$$</code>
and therefore
<code>$$\Pr(\lvert\hat\theta_n-\theta\rvert<\varepsilon)\ge1-\frac{1}{n\varepsilon^2}.$$</code>
Thus, for example, choosing <code>\(n\ge4000\)</code> guarantees that <code>\(\hat\theta_n\)</code> differs from <code>\(\theta\)</code> by less than <code>\(\varepsilon=0.05\)</code> with probability at least 0.9.</p>
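<p>The estimator’s behaviour is easy to check by simulation (a sketch with illustrative parameter values of my choosing; <code>alpha = 0.9</code> roughly matches the prevalence of right-handedness):</p>

```python
import random

def survey(theta, alpha, n, rng):
    """Simulate n randomized responses and return 2X/n - alpha:
    heads -> answer the sensitive question (true w.p. theta),
    tails -> answer the innocuous one (true w.p. alpha)."""
    X = sum(
        (rng.random() < theta) if rng.random() < 0.5 else (rng.random() < alpha)
        for _ in range(n)
    )
    return 2 * X / n - alpha

rng = random.Random(42)
theta, alpha, n = 0.1, 0.9, 4000
estimates = [survey(theta, alpha, n, rng) for _ in range(200)]
coverage = sum(abs(e - theta) < 0.05 for e in estimates) / len(estimates)
```

<p>Chebyshev’s bound says the coverage is at least 0.9 at <code>n = 4000</code>; in practice the binomial distribution concentrates much more tightly, so the simulated coverage is typically close to one.</p>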
Research funding and collaboration
https://bldavies.com/blog/research-funding-collaboration/
Mon, 12 Oct 2020 00:00:00 +0000https://bldavies.com/blog/research-funding-collaboration/<p>Research is increasingly conducted by teams.
Consequently, there is growing interest in the mechanisms underlying research team formation.
In <a href="https://www.nber.org/papers/w27916">a new NBER working paper</a>, my co-authors and I explore one potential mechanism: participation in research funding contests.
Such contests may promote collaboration for several reasons:</p>
<ul>
<li>They require proposal team members to invest resources in planning collaborative projects;</li>
<li>They may help researchers screen for productive collaborators;</li>
<li>If better ideas are more likely to win funding then success signals that researchers’ shared ideas are worth pursuing.</li>
</ul>
<p>These arguments suggest that the members of more successful proposal teams are more likely to become co-authors.
We test this hypothesis empirically, using data from New Zealand.
Our data include Scopus publication records on New Zealand researchers and their international co-authors.
We link these records to data on applications to the <a href="https://www.royalsociety.org.nz/what-we-do/funds-and-opportunities/marsden">Marsden Fund</a>, the premier source of funding for basic research in New Zealand.</p>
<p>In our data, researchers with more successful Marsden Fund applications tended to have more co-authors.
However, this tendency may be driven by confounding factors, such as researchers’ ability to generate publishable research.
We control for such factors by analysing co-authorship dynamics econometrically.
Specifically, we use <a href="https://doi.org/10.1016/B978-0-12-811771-2.00008-0">dyadic regression</a> to estimate how the probability that pairs of researchers co-author in a given year varies with their observable characteristics.
Pairs were more likely to co-author in a given year if</p>
<ul>
<li>they had co-authored with each other recently,</li>
<li>they co-authored with others often,</li>
<li>they published in similar fields,</li>
<li>their prior publications attracted more citations, or</li>
<li>their prior citation histories differed.</li>
</ul>
<p>The fifth bullet implies negative <a href="https://bldavies.com/blog/assortative-mixing/">assortative mixing</a> among the researchers in our data, which we suspect arises due to inter-generational collaboration (e.g., professors working with graduate students and post-docs).</p>
<p>On average, pairs were 13.8 percentage points more likely to co-author in a given year if they co-submitted Marsden Fund proposals during the previous ten years than if they did not.
This co-authorship rate was not significantly larger among pairs who received funding.
However, increasing the lag between our outcome and explanatory variables delivers the opposite result: funding receipt, rather than proposal submission, promotes co-authorship.
As discussed in <a href="https://www.nber.org/papers/w27916">our paper</a>, these patterns suggest that the “treatment effect” of research funding contest participation on co-authorship is limited to successful participants only.</p>
<p>Our analysis has both technical and policy implications.
On the technical side, we discuss some empirical problems that arise when analysing co-authorship networks, offer solutions to these problems, and discuss how these solutions affect our inferences.
On the policy side, we show how science funding schemes can influence how researchers choose collaborators, which may have long-term effects on how science and innovation systems evolve.</p>
Relatedness, complexity and local growth redux
https://bldavies.com/blog/relatedness-complexity-local-growth-redux/
Thu, 10 Sep 2020 00:00:00 +0000https://bldavies.com/blog/relatedness-complexity-local-growth-redux/<p>“Relatedness, Complexity and Local Growth,” co-authored with <a href="https://motu.nz/about-us/people/dave-mare/">Dave Maré</a> while I worked at <a href="https://motu.nz">Motu</a>, has undergone peer review.
A revised version was <a href="https://doi.org/10.1080/00343404.2020.1802418">published online</a> today and will appear in a future issue of <em>Regional Studies</em>.</p>
<p>Dave and I present a measure of the relatedness between economic activities that is more robust to noisy employment data than measures used in previous studies (e.g., <a href="https://doi.org/10.1080/00343404.2018.1437900">Balland et al., 2019</a>; <a href="https://doi.org/10.1126/science.1144581">Hidalgo et al., 2007</a>; <a href="http://econ.geo.uu.nl/peeg/peeg1931.pdf">Rigby et al., 2019</a>).
We demonstrate this robustness using historical census data from New Zealand.
We also demonstrate that relatedness patterns do not significantly influence the employment dynamics described by those data.</p>
<p>Our analysis suggests that the <a href="https://doi.org/10.1007/978-3-319-96661-8_46">principle of relatedness</a> applies in large geographic areas only.
In our New Zealand data, the <a href="https://en.wikipedia.org/wiki/Economies_of_agglomeration">benefits of proximity</a> are more apparent in larger cities, where workers engaged in related activities interact more frequently.
Our paper highlights some of the challenges with operationalising place-based regional growth and innovation policies, such as the <a href="https://s3platform.jrc.ec.europa.eu/what-is-smart-specialisation-">“smart specialisation” policies</a> adopted in the European Union.</p>
<p>Read <a href="https://doi.org/10.1080/00343404.2020.1802418">the published article</a> (available under Open Access) for more details.</p>
COVID-19, lockdown and two-sided uncertainty
https://bldavies.com/blog/covid-19-lockdown-two-sided-uncertainty/
Fri, 21 Aug 2020 00:00:00 +0000https://bldavies.com/blog/covid-19-lockdown-two-sided-uncertainty/<p>When the COVID-19 pandemic began, the New Zealand government faced uncertainty around the virus’ health and economic consequences.
Amid this uncertainty, the government had two choices: enter lockdown immediately or delay its decision.
Delaying preserved the option to enter lockdown if its necessity became clearer.
However, a delayed lockdown would be less effective if many people caught the virus while the government waited for clarifying information.</p>
<p>We know now that the government chose to enter lockdown early.
Was this the best choice given information available at the time?
To help answer this question, <a href="https://motu.nz/about-us/people/arthur-grimes/">Arthur Grimes</a> and I analyse the government’s decision in <a href="https://doi.org/10.1080/00779954.2020.1806340">an article</a> published last week in the <em>New Zealand Economic Papers</em>.
Our analysis formalises, and builds on, ideas discussed in <a href="https://bldavies.com/blog/policymaking-under-uncertainty/">my blog post on policymaking under uncertainty</a> and <a href="https://www.newsroom.co.nz/pro/was-lockdown-the-right-choice">Arthur’s commentary on the lockdown at <em>Newsroom</em></a>.</p>
<p>Arthur and I present a two-period model of the government’s choice problem.
In the first period, the government decides whether to enter lockdown given random future health and economic outcomes.
These outcomes are realised in the second period, at which time the government decides whether to maintain or reverse its initial decision.
That initial decision influences the joint probability distribution of health and economic outcomes, and the payoffs associated with each choice in the second period.
The government’s decision rule in the first period is to choose the policy that generates the greatest net expected payoff, given the dynamic consequences of the policy chosen.</p>
<p>We allow payoffs to vary with a parameter capturing the government’s aversion to health risks vis-à-vis economic risks.
The chart below shows how this parameter affects the payoff from each choice available in the first period.
As health risk aversion rises, the government increasingly prefers policies that insure against bad health outcomes.
Consequently, the value of entering lockdown rises while the value of delaying falls.
The non-linearity in the payoff curves reflects the non-linearity of health and economic costs under each policy choice: delaying lockdown suppresses economic costs but exposes the government to potentially exponential health costs if the virus spreads rampantly.</p>
<p><img src="figures/plot-1.svg" alt=""></p>
<p>See Arthur and my article, “<a href="https://doi.org/10.1080/00779954.2020.1806340">COVID-19, lockdown and two-sided uncertainty</a>,” for further discussion.</p>
Lessons from Dave Maré
https://bldavies.com/blog/lessons-dave-mare/
Sun, 16 Aug 2020 00:00:00 +0000https://bldavies.com/blog/lessons-dave-mare/<p>Last week I finished up at <a href="https://motu.nz">Motu</a>, an economic research institute where I worked for two and a half years.
During that time I learned a lot from <a href="https://motu.nz/about-us/people/dave-mare/">Dave Maré</a>, who taught me several techniques for conducting rigorous, intellectually honest empirical research.
This post describes three such techniques:
<a href="#state-your-predictions">stating your predictions</a>,
<a href="#have-weak-priors-and-strong-nulls">having weak priors and strong nulls</a>,
and <a href="#kill-off-the-variation">killing off the variation</a>.</p>
<h2 id="state-your-predictions">State your predictions</h2>
<p>The scientific method involves stating hypotheses <em>before</em> testing them.
Dave encourages this practice at a smaller scale: before plotting figures or printing regression tables, write down what you expect to see.</p>
<p>Stating your predictions forces you to think about how and why variables might be related.
For example, if I regressed workers’ wages on their years of education, I would expect to estimate a positive coefficient because education provides knowledge and skills that make people more employable.
Likewise, if I could control for natural ability then I would expect the coefficient on education to decrease because I would remove some endogeneity bias.
Forming these expectations (and their justifications) in advance makes my <a href="#have-weak-priors-and-strong-nulls">priors</a> explicit, making them easier to revise when confronted with new evidence.
It also insures against <em>ex post</em> rationalisations of the empirical patterns.</p>
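<p>This omitted-variable story can be illustrated with a small simulation (all coefficients invented for illustration): when ability raises both education and wages, the short regression overstates the return to education.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(size=n)
education = 12 + 2 * ability + rng.normal(size=n)    # ability raises education
wages = 5 + 1.0 * education + 3 * ability + rng.normal(size=n)

# Short regression: wages on education only (omits ability).
b_short = np.polyfit(education, wages, 1)[0]

# Long regression: wages on education, controlling for ability.
X = np.column_stack([np.ones(n), education, ability])
b_long = np.linalg.lstsq(X, wages, rcond=None)[0][1]
```

<p>The true return here is 1.0; the short regression’s slope is biased upward toward <code>1 + 3 * cov(a, e) / var(e) = 2.2</code>, and controlling for ability removes the bias.</p>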
<p>Stating your predictions also means you have two independent data sources—your predictions and your figures/tables—that you can compare to identify and correct mistakes.
For example, if I estimated a negative relationship between education and wages, I would want to make sure the disagreement between my intuition and my estimate was not due to errant definitions of the variables in my data.</p>
<h2 id="have-weak-priors-and-strong-nulls">Have weak priors and strong nulls</h2>
<p>Priors are beliefs held before gathering new evidence.
In empirical research, we usually derive priors from intuitive or logical reasoning (e.g., “education provides knowledge and skills that make people more employable”).
However, the world is more complicated than can be described by intuition and logic; people behave in unexpected and unpredictable ways.
Consequently, our priors can be incorrect or incomplete.
To have “weak priors” is to acknowledge such ignorance and to let your beliefs be guided by empirical evidence rather than by fallible reasoning.</p>
<p>However, empirical evidence comes in varying strengths.
To have “strong nulls” is to graduate from “ignorant” to “informed” only when supplied with strong evidence.
For example, if significant relationships persist after controlling for potentially confounding factors then those relationships are likely to reflect the true data-generating process.</p>
<h2 id="kill-off-the-variation">Kill off the variation</h2>
<p>Empirical models describe relationships between variables.
These relationships may not be first-order: the mechanisms that we think operate, and that our models aim to capture, may not be central to the stories playing out in our data.
To determine the centrality of our hypothesised mechanisms, Dave suggests trying to “kill off the variation”: add explanatory variables until the coefficients on our covariates of interest become insignificant.</p>
<p>For example, in “<a href="https://doi.org/10.1080/00343404.2020.1802418">Relatedness, Complexity and Local Growth</a>,” Dave and I analyse the relationship between local activity growth rates and several covariates that capture the prevalence of local employee interactions.
In theory, such interactions foster the growth of “complex” activities that build on existing local strengths.
However, in our data, most of the variation in local activity growth is explained by the growth experienced by the city and activity as a whole, and our chosen covariates provide no additional explanatory power.
Thus, while employee interactions may influence employment dynamics at the margin, such interactions are not central to the story of how New Zealand cities evolved during our period of study.</p>
Product-maximising partitions
https://bldavies.com/blog/product-maximising-partitions/
Wed, 08 Jul 2020 00:00:00 +0000https://bldavies.com/blog/product-maximising-partitions/<p>Let <code>\(\newcommand{\N}{\mathbb{N}}\N=\{1,2,\ldots\}\)</code> be the set of positive integers.
A <em>partition</em> of <code>\(n\in\N\)</code> is a way of writing <code>\(n\)</code> as a sum of positive integers, called <em>parts</em>.
For example, <code>\(1+2+3\)</code> is a partition of <code>\(6\)</code>, with parts <code>\(1\)</code>, <code>\(2\)</code>, and <code>\(3\)</code>.
Partitions are unique up to rearrangement: <code>\(1+2+3\)</code> and <code>\(3+2+1\)</code> are the same partition, but <code>\(1+2+3\)</code> and <code>\(3+3\)</code> are different partitions.</p>
<p>This post discusses the following problem:</p>
<blockquote>
<p>Let <code>\(n\ge2\)</code> be a positive integer.
Find a partition of <code>\(n\)</code> whose parts have maximum product.</p>
</blockquote>
<p>For example, the parts in <code>\(1+2+3\)</code> have product <code>\(1\times2\times3=6\)</code>, while the parts in <code>\(3+3\)</code> have product <code>\(3\times3=9\)</code>.
Our goal is to find a product-maximising partition for arbitrary <code>\(n\)</code>.</p>
<p>Let <code>\(x_1+x_2+\cdots+x_k\)</code> be a partition of <code>\(n\)</code>.
If <code>\(x_1=1\)</code> then <code>\(k\ge2\)</code> (since <code>\(n\ge2\)</code>) and
<code>$$\begin{align} \prod_{i=1}^kx_i &= 1\times x_2\times\prod_{i=3}^kx_i \\ &< (1+x_2)\times\prod_{i=3}^kx_i \end{align}$$</code>
because the <code>\(x_i\)</code> are strictly positive.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
Thus, replacing the partition <code>\(x_1+x_2+\cdots+x_k\)</code> with <code>\((1+x_2)+x_3+\cdots+x_k\)</code> delivers a greater product.
Since the <code>\(x_i\)</code> can be rearranged arbitrarily, it follows that product-maximising partitions contain no parts equal to one.
Similarly, if <code>\(x_1>4\)</code> then
<code>$$\begin{align} \prod_{i=1}^kx_i &= x_1\times\prod_{i=2}^kx_i \\ &< 3(x_1-3)\times\prod_{i=2}^kx_i, \end{align}$$</code>
so we can obtain a greater product by replacing <code>\(x_1+x_2+\cdots+x_k\)</code> with <code>\(3+(x_1-3)+x_2+\cdots+x_k\)</code>.
It follows that product-maximising partitions contain no parts greater than four.
But <code>\(2\times2=4\)</code> and <code>\(2+2=4\)</code>, so we can replace each four with two twos without reducing the parts’ product.
Thus, we can obtain a product-maximising partition using only twos and threes.
Finally, if a partition contains three twos then we should replace them with two threes, since <code>\(2+2+2=3+3\)</code> but <code>\(2^3=8<9=3^2\)</code>.</p>
<p>To summarise, we can obtain a product-maximising partition using only twos and threes, with as many threes as possible.
Letting <code>\(n=3q+r\)</code> for some <code>\(q\in\N\cup\{0\}\)</code> and <code>\(r\in\{0,1,2\}\)</code>, the maximum product we can obtain is
<code>$$P(n)=\begin{cases}3^q&\text{if}\ r=0\\ 2^2\times3^{q-1}&\text{if}\ r=1\\ 2\times3^q&\text{if}\ r=2.\end{cases}$$</code>
We can approximate this solution by <a href="https://en.wikipedia.org/wiki/Relaxation_%28approximation%29">relaxing</a> the integrality constraint on the <code>\(x_i\)</code>.
For any given <code>\(k\)</code>, we can find the vector <code>\(x^*\)</code> that solves
<code>$$\newcommand{\R}{\mathbb{R}}\max_{x\in\R_+^k}\prod_{i=1}^kx_i\ \text{subject to}\ \sum_{i=1}^kx_i=n \tag{1},$$</code>
where <code>\(\R_+\)</code> is the set of positive real numbers.
This vector has <code>\(x_i^*=n/k\)</code> for each <code>\(i\in\{1,2,\ldots,k\}\)</code>, so that <code>\(\prod_{i=1}^kx_i^*=(n/k)^k\)</code>.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
If there were no integrality constraint on <code>\(k\)</code> then we could maximise <code>\((n/k)^k\)</code> by choosing <code>\(k=n/e\)</code>, where <code>\(e\approx2.718\)</code> is Euler’s number.
But <code>\(k\)</code> must be an integer, so we should round it to the nearest integer in whatever direction delivers the greatest value of <code>\((n/k)^k\)</code>.
Doing so delivers an estimate
<code>$$\hat{P}(n)=\max\left\{\left(\frac{n}{\lfloor n/e\rfloor}\right)^{\lfloor n/e\rfloor},\left(\frac{n}{\lceil n/e\rceil}\right)^{\lceil n/e\rceil}\right\}$$</code>
of <code>\(P(n)\)</code>, where <code>\(x\mapsto\lfloor x\rfloor\)</code> and <code>\(x\mapsto\lceil x\rceil\)</code> are the floor (“round down”) and ceiling (“round up”) functions.</p>
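<p>Both formulas are straightforward to evaluate numerically. Here is an illustrative Python sketch (the function names are mine):</p>

```python
import math

def P(n):
    # Exact maximum product: as many threes as possible, remainder in twos.
    q, r = divmod(n, 3)
    if r == 0:
        return 3 ** q
    if r == 1:
        return 2 * 2 * 3 ** (q - 1)
    return 2 * 3 ** q

def P_hat(n):
    # Relaxed approximation: k equal real parts of size n/k, with k the
    # rounding of n/e that maximises (n/k)^k.  Discarding k = 0 handles
    # small n, where floor(n/e) = 0.
    ks = {math.floor(n / math.e), math.ceil(n / math.e)} - {0}
    return max((n / k) ** k for k in ks)

for n in (2, 3, 4, 5, 10, 50):
    print(n, P(n), round(P_hat(n), 2), round(P_hat(n) / P(n), 2))
```

<p>For example, <code>P_hat(10)</code> returns 39.0625 and <code>P_hat(10) / P(10)</code> is about 1.09, matching the table below.</p>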
<p>The table below compares <code>\(P(n)\)</code> and <code>\(\hat{P}(n)\)</code> for various <code>\(n\)</code>.
Since <code>\(\{2,3\}\subset\mathbb{R}_+\)</code>, the partition of <code>\(n\)</code> using twos and as many threes as possible is a feasible, but not necessarily optimal, solution to <code>\((1)\)</code> when <code>\(k\)</code> equals its number of parts; and since <code>\((n/k)^k\)</code> is unimodal in <code>\(k\)</code>, the estimate <code>\(\hat{P}(n)\)</code> equals the largest value of <code>\((n/k)^k\)</code> over all positive integers <code>\(k\)</code>.
Thus <code>\(P(n)\le\hat{P}(n)\)</code> for each <code>\(n\ge2\)</code>.
The multiplicative error <code>\(\hat{P}(n)/P(n)\)</code> grows exponentially with <code>\(n\)</code> because the exponent <code>\(k\in\{\lfloor n/e\rfloor,\lceil n/e\rceil\}\)</code> of <code>\((n/k)^k\)</code> grows roughly linearly with <code>\(n\)</code>, compounding the error in the approximation <code>\(n/k\approx e\)</code> to each part in the partition underlying <code>\(P(n)\)</code>.</p>
<table>
<thead>
<tr>
<th align="center"><code>\(n\)</code></th>
<th align="center"><code>\(P(n)\)</code></th>
<th align="center"><code>\(\hat{P}(n)\)</code></th>
<th align="center"><code>\(\hat{P}(n)/P(n)\)</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">2</td>
<td align="center">2</td>
<td align="center">2</td>
<td align="center">1.00</td>
</tr>
<tr>
<td align="center">3</td>
<td align="center">3</td>
<td align="center">3</td>
<td align="center">1.00</td>
</tr>
<tr>
<td align="center">4</td>
<td align="center">4</td>
<td align="center">4</td>
<td align="center">1.00</td>
</tr>
<tr>
<td align="center">5</td>
<td align="center">6</td>
<td align="center">6.25</td>
<td align="center">1.04</td>
</tr>
<tr>
<td align="center">10</td>
<td align="center">36</td>
<td align="center">39.06</td>
<td align="center">1.09</td>
</tr>
<tr>
<td align="center">50</td>
<td align="center">8.61×10<sup>7</sup></td>
<td align="center">9.70×10<sup>7</sup></td>
<td align="center">1.13</td>
</tr>
<tr>
<td align="center">100</td>
<td align="center">7.41×10<sup>15</sup></td>
<td align="center">9.47×10<sup>15</sup></td>
<td align="center">1.28</td>
</tr>
<tr>
<td align="center">500</td>
<td align="center">3.19×10<sup>79</sup></td>
<td align="center">7.66×10<sup>79</sup></td>
<td align="center">2.40</td>
</tr>
<tr>
<td align="center">1,000</td>
<td align="center">1.01×10<sup>159</sup></td>
<td align="center">5.86×10<sup>159</sup></td>
<td align="center">5.78</td>
</tr>
</tbody>
</table>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>If <code>\(j>k\)</code> then <code>\(\prod_{i=j}^kx_i=1\)</code> <a href="https://en.wikipedia.org/wiki/Empty_product">by convention</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>One can derive <code>\(x_i^*=n/k\)</code> using the <a href="https://en.wikipedia.org/wiki/Lagrange_multiplier">method of Lagrange multipliers</a>. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Understanding selection bias
https://bldavies.com/blog/understanding-selection-bias/
Fri, 03 Jul 2020 00:00:00 +0000https://bldavies.com/blog/understanding-selection-bias/<p>Suppose we have data <code>\(\{(x_i,y_i):i\in\{1,2,\ldots,n\}\}\)</code> generated by the process
<code>$$y_i=\beta x_i+u_i,$$</code>
where the <code>\(u_i\)</code> are random errors with zero means, equal variances, and zero correlations with the <code>\(x_i\)</code>.
This data generating process (DGP) satisfies the <a href="https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem">Gauss-Markov</a> assumptions, so we can obtain an unbiased estimate <code>\(\hat\beta\)</code> of the coefficient <code>\(\beta\)</code> using ordinary least squares (OLS).</p>
<p>Now suppose we restrict our data to observations with <code>\(x_i\ge0\)</code> or <code>\(y_i\ge0\)</code>.
How will these restrictions change <code>\(\hat\beta\)</code>?</p>
<p>To investigate, let’s create some toy data:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">dplyr</span><span class="p">)</span>
<span class="n">n</span> <span class="o"><-</span> <span class="m">100</span>
<span class="nf">set.seed</span><span class="p">(</span><span class="m">0</span><span class="p">)</span>
<span class="n">df</span> <span class="o"><-</span> <span class="nf">tibble</span><span class="p">(</span><span class="n">x</span> <span class="o">=</span> <span class="nf">rnorm</span><span class="p">(</span><span class="n">n</span><span class="p">),</span> <span class="n">u</span> <span class="o">=</span> <span class="nf">rnorm</span><span class="p">(</span><span class="n">n</span><span class="p">),</span> <span class="n">y</span> <span class="o">=</span> <span class="n">x</span> <span class="o">+</span> <span class="n">u</span><span class="p">)</span>
</code></pre></div><p>Here <code>\(x_i\)</code> and <code>\(u_i\)</code> are standard normal random variables, and <code>\(y_i=x_i+u_i\)</code> for each observation <code>\(i\in\{1,2,\ldots,100\}\)</code>.
Thus <code>\(\beta=1\)</code>.
The OLS estimate of <code>\(\beta\)</code> is
<code>$$\DeclareMathOperator{\Cov}{Cov}\DeclareMathOperator{\Var}{Var}\hat\beta=\frac{\Cov(x,y)}{\Var(x)},$$</code>
where <code>\(x=(x_1,x_2,\ldots,x_{100})\)</code> and <code>\(y=(y_1,y_2,\ldots,y_{100})\)</code> are data vectors, <code>\(\Cov\)</code> is the covariance operator, and <code>\(\Var\)</code> is the variance operator.
For these data, we have</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">cov</span><span class="p">(</span><span class="n">df</span><span class="o">$</span><span class="n">x</span><span class="p">,</span> <span class="n">df</span><span class="o">$</span><span class="n">y</span><span class="p">)</span> <span class="o">/</span> <span class="nf">var</span><span class="p">(</span><span class="n">df</span><span class="o">$</span><span class="n">x</span><span class="p">)</span>
</code></pre></div><pre><code>## [1] 1.138795
</code></pre><p>as our estimate with no selection.</p>
<p>Next, let’s introduce our selection criteria:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">df</span> <span class="o"><-</span> <span class="n">df</span> <span class="o">%>%</span>
<span class="n">tidyr</span><span class="o">::</span><span class="nf">crossing</span><span class="p">(</span><span class="n">criterion</span> <span class="o">=</span> <span class="nf">c</span><span class="p">(</span><span class="s">'x >= 0'</span><span class="p">,</span> <span class="s">'y >= 0'</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">rowwise</span><span class="p">()</span> <span class="o">%>%</span> <span class="c1"># eval is annoying to vectorise</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">selected</span> <span class="o">=</span> <span class="nf">eval</span><span class="p">(</span><span class="nf">parse</span><span class="p">(</span><span class="n">text</span> <span class="o">=</span> <span class="n">criterion</span><span class="p">)))</span> <span class="o">%>%</span>
<span class="nf">ungroup</span><span class="p">()</span>
<span class="n">df</span>
</code></pre></div><pre><code>## # A tibble: 200 x 5
## x u y criterion selected
## <dbl> <dbl> <dbl> <chr> <lgl>
## 1 -2.22 -0.0125 -2.24 x >= 0 FALSE
## 2 -2.22 -0.0125 -2.24 y >= 0 FALSE
## 3 -1.56 -1.12 -2.68 x >= 0 FALSE
## 4 -1.56 -1.12 -2.68 y >= 0 FALSE
## 5 -1.54 0.577 -0.963 x >= 0 FALSE
## 6 -1.54 0.577 -0.963 y >= 0 FALSE
## 7 -1.44 -1.39 -2.83 x >= 0 FALSE
## 8 -1.44 -1.39 -2.83 y >= 0 FALSE
## 9 -1.43 -0.543 -1.97 x >= 0 FALSE
## 10 -1.43 -0.543 -1.97 y >= 0 FALSE
## # … with 190 more rows
</code></pre><p>Now <code>df</code> contains two copies of each observation—one for each selection criterion—and an indicator for whether the observation is selected by each criterion.
We can use <code>df</code> to estimate OLS coefficients and their standard errors among observations with <code>\(x_i\ge0\)</code> and <code>\(y_i\ge0\)</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">df</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">selected</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">group_by</span><span class="p">(</span><span class="n">criterion</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">summarise</span><span class="p">(</span><span class="n">n</span> <span class="o">=</span> <span class="nf">n</span><span class="p">(),</span>
<span class="n">estimate</span> <span class="o">=</span> <span class="nf">cov</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span> <span class="o">/</span> <span class="nf">var</span><span class="p">(</span><span class="n">x</span><span class="p">),</span>
<span class="n">std.error</span> <span class="o">=</span> <span class="nf">sd</span><span class="p">(</span><span class="n">y</span> <span class="o">-</span> <span class="n">estimate</span> <span class="o">*</span> <span class="n">x</span><span class="p">)</span> <span class="o">/</span> <span class="nf">sqrt</span><span class="p">(</span><span class="n">n</span><span class="p">))</span>
</code></pre></div><pre><code>## # A tibble: 2 x 4
## criterion n estimate std.error
## <chr> <int> <dbl> <dbl>
## 1 x >= 0 48 1.02 0.136
## 2 y >= 0 47 0.356 0.110
</code></pre><p>The OLS estimate among observations with <code>\(x_i\ge0\)</code> approximates the true value <code>\(\beta=1\)</code> well.
However, the estimate among observations with <code>\(y_i\ge0\)</code> is much smaller than one.
We can confirm this visually:
<img src="figures/plot-1.svg" alt=""></p>
<p>What’s going on?
Why do we get biased OLS estimates of <code>\(\beta\)</code> among observations with <code>\(y_i\ge0\)</code> but not among observations with <code>\(x_i\ge0\)</code>?</p>
<p>The key is to think about the errors <code>\(u_i\)</code> in each case.
Since the <code>\(x_i\)</code> and <code>\(u_i\)</code> are independent, selecting observations with <code>\(x_i\ge0\)</code> leaves the distributions of the <code>\(u_i\)</code> unchanged—they still have zero means, equal variances, and zero correlations with the <code>\(x_i\)</code>.
Thus, the Gauss-Markov assumptions still hold and we still obtain unbiased OLS estimates of <code>\(\beta\)</code>.</p>
<p>In contrast, the <code>\(x_i\)</code> and <code>\(u_i\)</code> are negatively correlated among observations with <code>\(y_i\ge0\)</code>.
To see why, notice that if <code>\(y_i=x_i+u_i\)</code> then <code>\(y_i\ge0\)</code> if and only if <code>\(x_i\ge-u_i\)</code>.
So if <code>\(x_i\)</code> is low then <code>\(u_i\)</code> must be high (and vice versa) for the observation to be selected.
Thus, among selected observations, we have
<code>$$u_i=\rho x_i+\varepsilon_i,$$</code>
where <code>\(\rho<0\)</code> indexes (and, in this case, equals) the correlation between the <code>\(x_i\)</code> and <code>\(u_i\)</code>, and where the residuals <code>\(\varepsilon_i\)</code> are uncorrelated with the <code>\(x_i\)</code>.
Our DGP then becomes
<code>$$y_i=(\beta+\rho)x_i+\varepsilon_i.$$</code>
The <code>\(\varepsilon_i\)</code> have equal variances (equal to <code>\(1+\rho^2\)</code> in this case) and, again, are uncorrelated with the <code>\(x_i\)</code>.
Therefore, the OLS estimate
<code>$$\hat\rho=\frac{\Cov(u,x)}{\Var(x)}$$</code>
of <code>\(\rho\)</code> is unbiased<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>, and for our toy data equals <code>\(\hat\rho\approx-0.644\)</code> among observations with <code>\(y_i\ge0\)</code>.
Subtracting <code>\(\hat\rho\)</code> from <code>\(\hat\beta\)</code> then gives
<code>$$\begin{align} \hat\beta-\hat\rho &\approx 0.356 - (-0.644) \\ &= 1, \end{align}$$</code>
recovering the true value <code>\(\beta=1\)</code>.</p>
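<p>The post’s code is in R, but the mechanics are easy to reproduce; below is an illustrative Python sketch (variable and function names are mine) using only the standard library. Because <code>\(y_i-u_i=x_i\)</code>, the in-sample identity <code>\(\hat\beta-\hat\rho=\Cov(x,x)/\Var(x)=1\)</code> holds exactly in any selected subsample:</p>

```python
import random

random.seed(0)
n = 100_000

# DGP: y_i = x_i + u_i with x_i, u_i independent standard normals (beta = 1).
x = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
y = [a + b for a, b in zip(x, u)]

def ols_slope(xs, ys):
    # OLS slope with intercept: Cov(x, y) / Var(x).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

# Select on the outcome: keep observations with y_i >= 0.
keep = [i for i in range(n) if y[i] >= 0]
x_s, y_s, u_s = ([v[i] for i in keep] for v in (x, y, u))

beta_hat = ols_slope(x_s, y_s)  # biased towards zero
rho_hat = ols_slope(x_s, u_s)   # negative: x and u correlate after selection
recovered = beta_hat - rho_hat  # equals 1 up to floating-point error
```

<p>Selecting on <code>\(x_i\ge0\)</code> instead (replace the <code>keep</code> condition) leaves <code>beta_hat</code> close to one, mirroring the post’s R results.</p>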
<p>The table below reports 95% confidence intervals for <code>\(\hat\beta\)</code>, <code>\(\hat\rho\)</code>, and <code>\((\hat\beta-\hat\rho)\)</code>, estimated by simulating the DGP <code>\(y_i=x_i+u_i\)</code> described above 100 times.
The table confirms that the OLS estimate <code>\(\hat\beta\)</code> of <code>\(\beta=1\)</code> is unbiased among observations with <code>\(x_i\ge0\)</code> but biased negatively among observations with <code>\(y_i\ge0\)</code>.</p>
<table>
<thead>
<tr>
<th align="left">Observations</th>
<th align="center"><code>\(\hat\beta\)</code></th>
<th align="center"><code>\(\hat\rho\)</code></th>
<th align="center"><code>\(\hat\beta-\hat\rho\)</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">All</td>
<td align="center">1.005 ± 0.002</td>
<td align="center">0.005 ± 0.002</td>
<td align="center">1.000 ± 0.000</td>
</tr>
<tr>
<td align="left">With <code>\(x_i\ge0\)</code></td>
<td align="center">1.001 ± 0.004</td>
<td align="center">0.001 ± 0.004</td>
<td align="center">1.000 ± 0.000</td>
</tr>
<tr>
<td align="left">With <code>\(y_i\ge0\)</code></td>
<td align="center">0.547 ± 0.003</td>
<td align="center">-0.453 ± 0.003</td>
<td align="center">1.000 ± 0.000</td>
</tr>
</tbody>
</table>
<p>The estimate <code>\(\hat\beta\)</code> always differs from <code>\(\beta\)</code> by <code>\(\hat\rho\)</code>, which is significantly non-zero among observations with <code>\(y_i\ge0\)</code>.
However, this pattern is not useful empirically because we generally don’t observe the <code>\(u_i\)</code> and so can’t compute <code>\(\hat\rho\)</code> to back out <code>\(\beta\)</code> as <code>\(\hat\beta-\hat\rho\)</code>.
Instead, we may use the <a href="https://en.wikipedia.org/wiki/Heckman_correction">Heckman correction</a> to adjust for the bias introduced through non-random selection.</p>
<p>In empirical settings, selecting observations with <code>\(x_i\ge0\)</code> may lead to biased estimates when (i) there is heterogeneity in the relationship between <code>\(y_i\)</code> and <code>\(x_i\)</code> across observations <code>\(i\)</code>, and (ii) OLS is used to estimate an <a href="https://en.wikipedia.org/wiki/Average_treatment_effect">average treatment effect</a>.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
In particular, if the <code>\(x_i\)</code> are correlated with the observation-specific treatment effects then restricting to observations with <code>\(x_i\ge0\)</code> changes the distribution, and hence the mean, of those effects non-randomly.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>We can rewrite <code>\(\varepsilon_i=\alpha+(\varepsilon_i-\alpha)\)</code>, where <code>\(\alpha\)</code> is the mean of the <code>\(\varepsilon_i\)</code>, and where the <code>\((\varepsilon_i-\alpha)\)</code> have zero means, equal variances, and zero correlations with the <code>\(x_i\)</code>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Thanks to <a href="http://shakkednoy.com">Shakked</a> for pointing this out. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Modelling bacterial extinction
https://bldavies.com/blog/modelling-bacterial-extinction/
Sun, 14 Jun 2020 00:00:00 +0000https://bldavies.com/blog/modelling-bacterial-extinction/<p>This week’s <a href="https://fivethirtyeight.com/features/how-long-will-the-bacterial-colony-last/">Riddler Classic</a> poses a question about bacteria (paraphrased for brevity):</p>
<blockquote>
<p>Each bacterium in a colony splits into two copies with probability <code>\(p\)</code> and dies with probability <code>\((1-p)\)</code>.
If the colony starts with one bacterium, what is the probability that the colony survives forever?</p>
</blockquote>
<p>We can model the colony’s size as a <a href="https://en.wikipedia.org/wiki/Galton%E2%80%93Watson_process">Galton-Watson process</a>.
Let <code>\(X_t\)</code> be the colony’s size in generation <code>\(t\in\{1,2,\ldots\}\)</code> and let <code>\(Y_{it}\)</code> be the number of offspring generated by bacterium <code>\(i\in\{1,2,\ldots,X_t\}\)</code>.
The <code>\(Y_{it}\)</code> are independently and identically distributed, with
<code>$$\Pr(Y_{it}=y)=\begin{cases} p & \text{if}\ y=2 \\ 1-p & \text{if}\ y=0 \\ 0 & \text{otherwise} \end{cases}$$</code>
for each <code>\(i\)</code> and <code>\(t\)</code>.
The colony’s size grows according to
<code>$$X_{t+1}=\sum_{i=1}^{X_t}Y_{it}$$</code>
with <code>\(X_1=1\)</code>.
Our goal is to compute
<code>$$\lim_{t\to\infty}\Pr(X_t>0)=1-\lim_{t\to\infty}q_t,$$</code>
where <code>\(q_t\equiv\Pr(X_t=0)\)</code> is the probability that the colony is extinct by generation <code>\(t\)</code>.</p>
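<p>Before taking limits analytically, we can estimate the survival probability by simulation. The following Python sketch (illustrative, not from the original post) caps the colony size as a shortcut: once a supercritical colony is large, its extinction probability is negligible, so reaching the cap is treated as surviving forever:</p>

```python
import random

def survives(p, cap=1_000, rng=random):
    # Simulate one colony: each bacterium leaves 2 offspring with
    # probability p, else 0.  Treat reaching `cap` bacteria as survival,
    # since extinction from a large colony is astronomically unlikely.
    size = 1
    while 0 < size < cap:
        size = 2 * sum(rng.random() < p for _ in range(size))
    return size >= cap

random.seed(1)
trials = 2_000
estimate = sum(survives(0.8) for _ in range(trials)) / trials
```

<p>With <code>\(p=0.8\)</code>, the estimate lands near 0.75, matching the closed-form answer derived below.</p>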
<p>We can compute <code>\(q_t\)</code> by conditioning on <code>\(Y_{11}\)</code>, the number of offspring generated by the first bacterium.
If <code>\(Y_{11}=0\)</code> then the colony is extinct from the second generation onwards.
However, if <code>\(Y_{11}=2\)</code> then there are two sub-colonies in the second generation that must be extinct in <code>\((t-1)\)</code> generations if the whole colony is extinct by generation <code>\(t\)</code>.
These sub-colonies grow independently, so the probability that both are extinct in <code>\((t-1)\)</code> generations is <code>\(q_{t-1}^2\)</code>.
Thus, by the <a href="https://en.wikipedia.org/wiki/Law_of_total_probability">law of total probability</a>, we have
<code>$$\begin{align} q_t &= \Pr(X_t=0\,\vert\,Y_{11}=0)\Pr(Y_{11}=0)+\Pr(X_t=0\,\vert\,Y_{11}=2)\Pr(Y_{11}=2) \\ &= 1\times(1-p)+q_{t-1}^2\times p \\ &= 1-p+pq_{t-1}^2 \end{align}$$</code>
for <code>\(t\ge2\)</code>.
Defining <code>\(q\equiv\lim_{t\to\infty}q_t\)</code> and taking limits as <code>\(t\to\infty\)</code> gives<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
<code>$$q=1-p+pq^2,$$</code>
which has solutions
<code>$$\begin{align} \newcommand{\abs}[1]{\lvert#1\rvert} q &= \frac{1\pm\sqrt{1-4p(1-p)}}{2p} \\ &= \frac{1\pm\sqrt{(2p-1)^2}}{2p} \\ &= \frac{1\pm\abs{2p-1}}{2p}. \end{align}$$</code>
If <code>\(p<0.5\)</code> then the larger solution exceeds unity, which we cannot have because <code>\(q\)</code> is a probability.
If <code>\(p\ge0.5\)</code> then the larger solution equals one, but the <code>\(q_t\)</code> start at <code>\(q_1=0\)</code> and increase towards the smallest fixed point of <code>\(q\mapsto1-p+pq^2\)</code>, so the smaller solution again applies.
Thus
<code>$$\lim_{t\to\infty}\Pr(X_t>0)=1-\frac{1-\abs{2p-1}}{2p}.$$</code>
For example, if <code>\(p=0.8\)</code> then the colony survives forever with probability
<code>$$1-\frac{1-\abs{2\times0.8-1}}{2\times0.8}=0.75.$$</code>
If <code>\(p\le0.5\)</code> then extinction is guaranteed because each bacterium generates at most one offspring on average.</p>
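<p>Both the recursion and the closed form are easy to verify numerically; a short Python sketch (the function names are mine):</p>

```python
def extinction_prob(p, generations=200):
    # Iterate q_t = 1 - p + p * q_{t-1}^2 from q_1 = 0; the q_t increase
    # towards the smallest fixed point, the extinction probability.
    q = 0.0
    for _ in range(generations):
        q = 1 - p + p * q * q
    return q

def survival_prob(p):
    # Closed form: 1 - (1 - |2p - 1|) / (2p).
    return 1 - (1 - abs(2 * p - 1)) / (2 * p)
```

<p>For <code>\(p=0.8\)</code>, iterating returns an extinction probability of 0.25 (to machine precision), so <code>survival_prob(0.8)</code> gives 0.75; for any <code>\(p\le0.5\)</code>, survival has probability zero.</p>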
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>The function <code>\(x\mapsto x^2\)</code> is continuous and so preserves limits. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Applying to economics PhD programs
https://bldavies.com/blog/applying-economics-phd-programs/
Sat, 13 Jun 2020 00:00:00 +0000https://bldavies.com/blog/applying-economics-phd-programs/<p>Last year I applied to several economics PhD programs at elite universities and business schools.
I applied to twelve programs (nine in economics and three in business), was accepted by three, and <a href="https://bldavies.com/blog/stanford/">chose to study at Stanford</a>.
This post describes my experience with the application process and offers some advice to future applicants.</p>
<h2 id="contents">Contents</h2>
<ul>
<li><a href="#before-applying">Before applying</a>
<ul>
<li><a href="#earning-a-degree">Earning a degree</a></li>
<li><a href="#gaining-research-experience">Gaining research experience</a></li>
<li><a href="#completing-the-gre">Completing the GRE</a></li>
<li><a href="#choosing-where-to-apply">Choosing where to apply</a></li>
</ul>
</li>
<li><a href="#application-materials">Application materials</a>
<ul>
<li><a href="#transcripts">Transcripts</a></li>
<li><a href="#gre-score-reports">GRE score reports</a></li>
<li><a href="#recommendation-letters">Recommendation letters</a></li>
<li><a href="#statements-of-purpose">Statements of purpose</a></li>
<li><a href="#writing-samples">Writing samples</a></li>
<li><a href="#diversity-statements">Diversity statements</a></li>
</ul>
</li>
<li><a href="#after-applying">After applying</a>
<ul>
<li><a href="#waiting-for-responses">Waiting for responses</a></li>
<li><a href="#interviews">Interviews</a></li>
<li><a href="#admissions-decisions">Admissions decisions</a></li>
</ul>
</li>
<li><a href="#further-reading">Further reading</a></li>
</ul>
<h2 id="before-applying">Before applying</h2>
<p>The programs I applied to accepted applications between late September and early December.
However, these applications depended on tasks completed earlier: <a href="#earning-a-degree">earning a degree</a>, <a href="#gaining-research-experience">gaining research experience</a>, <a href="#completing-the-gre">completing the Graduate Record Exam (GRE)</a>, and <a href="#choosing-where-to-apply">choosing where to apply</a>.</p>
<h3 id="earning-a-degree">Earning a degree</h3>
<p>Every program required that I hold the equivalent of a four-year bachelor’s degree or higher.
Most stated explicitly that a master’s was not necessary.
Some noted that applicants need not have majored in economics, though prior coursework (e.g., intermediate microeconomics) helps to signal interest and familiarity.
Most added that applicants should be comfortable with undergraduate-level calculus, linear algebra, and probability and statistics.</p>
<h3 id="gaining-research-experience">Gaining research experience</h3>
<p>While not required explicitly, my impression is that most successful applicants to top programs have some research experience.
Such experience helps demonstrate that you know what research is and can conduct it successfully.
Moreover, everyone applying to top programs has stellar grades, so having research experience helps you stand out.</p>
<p>Thankfully, there are many ways to gain research experience.
I have four recommendations.</p>
<p>First, write an honours or master’s thesis.
Doing so provides early evidence that you’re interested in research and can work independently.</p>
<p>Second, work with professors while studying.
The University of Canterbury (UC), where I completed my bachelor’s degree, <a href="https://www.canterbury.ac.nz/get-started/summer-school/summer-scholarships/">offers scholarships</a> to work with professors during summer breaks.
I won one to work with Richard Watt on a theoretical project related to insurance pricing.
Completing the project gave me experience to discuss in my <a href="#statements-of-purpose">statement of purpose</a> and gave Richard something to discuss in his <a href="#recommendation-letters">recommendation letter</a>.</p>
<p>Third, work at a research-oriented organisation after finishing your bachelor’s or master’s.
In New Zealand, the best place is <a href="https://motu.nz">Motu</a> or <a href="https://www.rbnz.govt.nz">the Reserve Bank</a>, depending on whether you’re more interested in microeconomics or macroeconomics.
Working at Motu has improved my technical and research skills, and given me experience working with respected economists on substantive research projects.
It has also helped clarify what a “research career” looks like and whether it’s something I want to pursue.</p>
<p>Finally, consider completing a pre-doctoral fellowship at an elite university.
These fellowships typically last one or two years, and involve assisting professors with their research.
Pre-doctoral fellowships deliver similar benefits to working at places like Motu.
However, some fellowships (e.g., those offered by <a href="https://opportunityinsights.org/joinourteam/">Opportunity Insights</a> at Harvard and <a href="https://siepr.stanford.edu/research/student-opportunities/predoctoral-fellowship">SIEPR</a> at Stanford) allow you to take graduate courses while working, further strengthening your profile.
Moreover, working with well-known economists at elite universities (and impressing them) helps you gain strong recommendation letters.</p>
<h3 id="completing-the-gre">Completing the GRE</h3>
<p>All programs required official scores from <a href="https://en.wikipedia.org/wiki/Graduate_Record_Examinations">the (general) GRE</a>, a standardised test comprising three sections: quantitative reasoning, verbal reasoning, and analytical writing.
The test can be attempted multiple times.
Programs consider only your highest score on each section.</p>
<p>I sat the GRE once, in 2018.
The test took about four hours.
The quantitative and verbal reasoning sections each comprised two sets of 20 multi-choice questions.
The quantitative section was mostly high school-level mathematics.
(New Zealanders: think NCEA Level 1 or 2.)
The verbal section tested reading comprehension and vocabulary.
The analytical writing section comprised two short, typed essay responses to prompts given during the test.
I think anyone who recently earned a bachelor’s degree in economics could do well on the test with 2–4 weeks of study.</p>
<p><a href="https://doi.org/10.1080/00220485.2020.1731385">Jones et al. (2020)</a> survey graduate admissions coordinators, who report placing more emphasis on quantitative reasoning scores than verbal reasoning scores when evaluating applicants.
Both scores are less important at higher-ranked programs because applicants to such programs tend to have higher scores, leaving less variation for identifying applicants’ relative abilities.
For example, Harvard’s economics department <a href="https://economics.harvard.edu/pages/admissions">states</a> that admitted candidates’ quantitative scores are typically “in the 97th percentile.”
I scored in the 94th percentile and would have resat the test if I had scored any lower.</p>
<h3 id="choosing-where-to-apply">Choosing where to apply</h3>
<p>I applied to most programs in the “top 10,” and a few more specialised programs that matched my interests and geographic preferences.
I figured that if I was going to move overseas, away from my family and friends, then I better go somewhere excellent.
If I had a weaker technical background or less research experience then I might have aimed lower.</p>
<p>Beyond this “aim high” strategy, I have two recommendations.</p>
<p>First, apply to as many programs as you can afford and would attend.
The marginal effort cost of applying to each program falls quickly after preparing your first set of application materials.
Moreover, although the application fees can sting, they are small compared to the expected gain in life satisfaction from being admitted.</p>
<p>Second, apply to programs at business schools as well as economics departments.
Chicago, Harvard, Northwestern, NYU, and Stanford’s business schools all offer excellent economics-focused PhD programs.
They provide similar technical training and faculty access to “traditional” programs.
However, business schools tend to offer larger stipends and require less teaching than economics departments.
Business schools tend to make fewer offers, but they also tend to receive fewer applications.</p>
<h2 id="application-materials">Application materials</h2>
<p>All of the programs I applied to required the following materials:</p>
<ul>
<li>An application form, submitted online;</li>
<li>Copies of my academic <a href="#transcripts">transcripts</a>;</li>
<li>Official <a href="#gre-score-reports">GRE score reports</a>;</li>
<li><a href="#recommendation-letters">Recommendation letters</a>;</li>
<li>A CV;</li>
<li>A <a href="#statements-of-purpose">statement of purpose</a>.</li>
</ul>
<p>Most programs required a <a href="#writing-samples">writing sample</a>.
Some required a (short) <a href="#diversity-statements">diversity statement</a>.
All required payment of a 75–125 USD application fee.</p>
<p>Overall, it took about a month to prepare my application materials and about a day to tailor them to each program.
To track my progress and help manage my time, I maintained a checklist of form sections to complete and materials to upload.</p>
<h3 id="transcripts">Transcripts</h3>
<p>Stanford asked for official copies of my academic transcripts.
All other programs accepted “unofficial” copies.
I ordered a digital copy from UC, which set up a <a href="https://www.myequals.ac.nz">My eQuals</a> account with my transcript uploaded as a PDF and certified by the UC registrar.
I shared this certified version with Stanford, saving me about 190 USD worth of third-party certification fees.
I downloaded the PDF version from My eQuals and used it as the unofficial copy for my other applications.</p>
<p>In addition to transcripts, some schools asked for more information about my prior coursework.
Harvard and MIT asked for comprehensive lists of course codes and titles, dates completed, grades obtained, and textbooks used.
Other programs asked for similar information but only for the handful of “most advanced” courses I’d taken in economics, mathematics, and statistics.
Stanford asked me to match the courses I’d taken with courses offered at Stanford.
The matching took a while because the courses I took at UC often matched Stanford courses in different subject areas and at different degree levels.</p>
<p>New Zealand universities use a nine-point GPA system, whereas the universities I applied to use a four-point system.
Some programs asked me to report my GPA on its original scale, some asked me to convert it to the four-point scale, and some asked me to leave the GPA field blank.
Overall, the difference in systems didn’t seem to be problematic.</p>
<h3 id="gre-score-reports">GRE score reports</h3>
<p>All programs asked for official GRE score reports.
The testing fee (205 USD) covers the cost of sending scores to up to four institutions, nominated on test day.
Sending scores to additional institutions costs 27 USD per institution.
I didn’t nominate any schools on test day because I wasn’t sure whether I would need to resit the test, or whether sending low scores would hurt my admissions chances even if I resat the test and performed better.
Once I sent my score reports, most programs confirmed receipt after about a week.</p>
<h3 id="recommendation-letters">Recommendation letters</h3>
<p>All programs asked me to nominate three recommendation letter writers.
I arranged my recommenders about two months in advance.
I gave each a list of programs I was applying to, a description of each program, and the due date for their letters.
I also provided copies of my CV, transcript, and draft statements of purpose.</p>
<p>Whenever I nominated a recommender, I was asked whether I wanted to waive my <a href="https://www2.ed.gov/ferpa/">FERPA</a> right to view their letter upon admission.
I always waived.
I wasn’t concerned that my recommenders would change what they wrote if they knew I could read their letters.
Instead, I was concerned that admissions committees would observe that I chose not to waive access, assume that my recommenders responded by providing stronger-than-truthful recommendations, and subsequently discount the quality of those recommendations.</p>
<h3 id="statements-of-purpose">Statements of purpose</h3>
<p>All programs asked for a statement describing my preparation for graduate study, my research experience and interests, and my career goals.
<a href="statement.pdf">The statement I submitted to Stanford</a> contained</p>
<ul>
<li>a brief introduction,</li>
<li>a paragraph describing my educational background,</li>
<li>five paragraphs describing my research experience,</li>
<li>a paragraph stating my research interests, and</li>
<li>a paragraph stating my career goals.</li>
</ul>
<p>I focused on my research experience because I felt that it was my comparative advantage over other applicants, whom I assumed were well-trained technically and had more prestigious alma maters.</p>
<h3 id="writing-samples">Writing samples</h3>
<p>Most programs asked for a writing sample.
Some programs required at least 15 pages; some required at most 10 pages.
In both cases, I used an excerpt from my most recent journal submission.
For long samples, I excluded figures and tables, which happened to leave 15 pages.
For short samples, I included only the first eight pages, which contained the introduction, literature review, method, and data sections.
I always included a cover page describing the excerpt and stating the full paper’s abstract.</p>
<p>I could have submitted my honours thesis, which analysed a theoretical model of insurance and saving.
However, I felt that my academic transcript signalled my technical skills adequately.
Instead, I wanted my writing sample to demonstrate skills not demonstrated by other application materials: identifying interesting and important research questions, and synthesising literature.</p>
<h3 id="diversity-statements">Diversity statements</h3>
<p>Stanford and Yale asked me to explain how I would contribute to diversity on campus.
My response to Stanford read as follows:</p>
<blockquote>
<p>I grew up in Wakefield, a small rural town in New Zealand.
I have been fortunate to attend university, to discover my passion for research, and to collaborate on research projects with economists from Europe and North America.
These projects have benefited from the diverse ideas and experiences of my collaborators, which have increased the quality of our work.</p>
<p>I am excited to continue engaging with ideas in an inclusive research environment as a graduate student at Stanford.
I am also excited to share my cultural experiences in New Zealand with my Stanford classmates, and to learn about their experiences in other countries.
Doing so will increase our understanding of how different cultural values shape economic and social outcomes.
This understanding will enhance our ability to conduct globally relevant economic research that considers a range of perspectives.</p>
</blockquote>
<h2 id="after-applying">After applying</h2>
<p>Clicking “submit” on the online application forms began the long (roughly three-month) <a href="#waiting-for-responses">wait for responses</a>.
In two cases, those responses were invitations for <a href="#interviews">interviews</a>; in most cases, they were <a href="#admissions-decisions">admissions decisions</a>.</p>
<h3 id="waiting-for-responses">Waiting for responses</h3>
<p>On waiting for responses, I offer three pieces of advice.</p>
<p>First, <em>take a break</em>.
Applying to PhD programs takes many years of effort: earning a degree, gaining research experience, building relationships with recommendation letter writers, completing the GRE, and preparing your applications.
Make time to acknowledge and celebrate that effort.</p>
<p>Second, realise that there is nothing you can do (except, if invited, prepare for interviews) to change your admissions decisions.
Worrying is futile.
Instead, try to find fun and engaging ways to spend your time that take your mind off your applications.
I ran a lot and worked on some blog posts.</p>
<p>Third, try to stay off <a href="https://www.urch.com/forums/phd-economics/">Urch</a> and <a href="https://www.thegradcafe.com">TheGradCafe</a>.
In late January, people will start using those fora to share their anxiety and admissions results.
You will, after months of waiting, be hungry for news.
However, if you’re going to get good news then you will receive it from the program first.
Programs generally send all acceptances at the same time (or, at least, on the same day).
Thus, online fora can only deliver bad news: others received acceptance notifications but you did not.</p>
<h3 id="interviews">Interviews</h3>
<p>As far as I know, only business schools conduct interviews.
I interviewed for the business programs at Harvard and MIT, in late January and early February.
Both interviews comprised a discussion of my research experience and interests, and of why those interests were best pursued at a business school.
The interviews lasted about fifteen minutes each and took place over Zoom.</p>
<h3 id="admissions-decisions">Admissions decisions</h3>
<p>Most programs sent admissions decisions in late February or early March.
Each decision was an acceptance, a rejection, or a wait-list placement.
The program that wait-listed me was weaker than my best offer at the time, so I withdrew from its wait list promptly to help the market clear.</p>
<h2 id="further-reading">Further reading</h2>
<p>See <a href="https://www.reddit.com/r/Economics/wiki/career_undergrad_links">here</a> for more resources on economics PhD admissions.
I found Susan Athey’s <a href="https://athey.people.stanford.edu/professional-advice">professional advice</a>, Chris Blattman’s <a href="https://chrisblattman.com/about/contact/gradschool/">FAQs on PhD applications</a>, and Abhishek Nagaraj’s <a href="http://abhishekn.com/files/phdguide.pdf">guide to business PhD applications</a> particularly helpful.</p>
Estimating research field similarities
https://bldavies.com/blog/estimating-research-field-similarities/
Sat, 30 May 2020 00:00:00 +0000https://bldavies.com/blog/estimating-research-field-similarities/<p>Research often draws on multiple fields, each contributing field-specific ideas and techniques to the production of new knowledge.
The more similar are two fields, the easier it is to combine their ideas and techniques, the more frequently such combination occurs, and the more demand there is for ways to publish the consequent research.
Likewise, the more similar are two fields, the easier it is to attract (subscription fee-paying) readers to journals covering those fields, and so the more willing publishers are to supply such journals.
Thus, in equilibrium, the frequency with which journals cover pairs of research fields rises with the similarity between those fields.</p>
<p>This argument suggests that we can estimate research field similarities from data on journals and the fields they cover.
One source of such data is the <a href="https://www.scopus.com/home.uri">Scopus</a> source list, which matches journals to fields within Scopus’ <a href="https://service.elsevier.com/app/answers/detail/a_id/15181/supporthub/scopus/">All Science Journal Classification (ASJC)</a> system.
The Scopus source list covers 24,039 active journals, each assigned to one or more of 26 ASJC fields.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
Each of these fields belongs to one of four subject areas: Health, Life, Physical, and Social Sciences.
The bar chart below presents the distribution of journals across fields, with bars coloured by subject area.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup></p>
<p><img src="figures/counts-1.svg" alt=""></p>
<p>I estimate the similarity between ASJC fields as follows.
First, I count the number of journals assigned to each pair of fields.
I then divide these co-assignment counts by the number of journals assigned to at least one of the paired fields.
This normalisation delivers the <a href="https://en.wikipedia.org/wiki/Jaccard_index">Jaccard similarities</a> between the sets of journals assigned to each field.</p>
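<p>The two-step calculation can be sketched in Python. The journal&ndash;field assignments below are hypothetical toy data, not the Scopus source list; the logic (count co-assignments, then normalise by the union) is the same:</p>

```python
# Toy sketch of the estimation: invert a (hypothetical) journal-to-fields
# mapping, then compute Jaccard similarities between fields' journal sets.
from itertools import combinations

journal_fields = {
    "Journal A": {"Economics", "Mathematics"},
    "Journal B": {"Economics", "Business"},
    "Journal C": {"Mathematics", "Computer Science"},
    "Journal D": {"Economics", "Business"},
}

# Invert the mapping: field -> set of journals assigned to it.
field_journals = {}
for journal, fields in journal_fields.items():
    for field in fields:
        field_journals.setdefault(field, set()).add(journal)

def jaccard(f1, f2):
    """Co-assignments divided by journals assigned to at least one field."""
    a, b = field_journals[f1], field_journals[f2]
    return len(a & b) / len(a | b)

for f1, f2 in combinations(sorted(field_journals), 2):
    print(f1, f2, round(jaccard(f1, f2), 2))
```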
<p>On average, each ASJC field pair shares 62.43 co-assignments and has a Jaccard similarity of 0.02.
About 83% of pairs share at least one journal co-assignment.
The table below presents the ten field pairs with the greatest Jaccard similarities.</p>
<table>
<thead>
<tr>
<th align="center">Field 1</th>
<th align="center">Field 2</th>
<th align="center">Co-assignments</th>
<th align="center">Jaccard similarity</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Arts and Humanities</td>
<td align="center">Social Sciences</td>
<td align="center">2,247</td>
<td align="center">0.29</td>
</tr>
<tr>
<td align="center">Materials Science</td>
<td align="center">Physics and Astronomy</td>
<td align="center">403</td>
<td align="center">0.23</td>
</tr>
<tr>
<td align="center">Chemical Engineering</td>
<td align="center">Chemistry</td>
<td align="center">222</td>
<td align="center">0.19</td>
</tr>
<tr>
<td align="center">Engineering</td>
<td align="center">Materials Science</td>
<td align="center">564</td>
<td align="center">0.18</td>
</tr>
<tr>
<td align="center">Business, Management and Accounting</td>
<td align="center">Economics, Econometrics and Finance</td>
<td align="center">359</td>
<td align="center">0.18</td>
</tr>
<tr>
<td align="center">Agricultural and Biological Sciences</td>
<td align="center">Environmental Science</td>
<td align="center">459</td>
<td align="center">0.16</td>
</tr>
<tr>
<td align="center">Computer Science</td>
<td align="center">Mathematics</td>
<td align="center">395</td>
<td align="center">0.15</td>
</tr>
<tr>
<td align="center">Chemistry</td>
<td align="center">Materials Science</td>
<td align="center">252</td>
<td align="center">0.15</td>
</tr>
<tr>
<td align="center">Computer Science</td>
<td align="center">Engineering</td>
<td align="center">492</td>
<td align="center">0.14</td>
</tr>
<tr>
<td align="center">Engineering</td>
<td align="center">Physics and Astronomy</td>
<td align="center">383</td>
<td align="center">0.12</td>
</tr>
</tbody>
</table>
<p>We can visualise the similarities between ASJC fields by constructing a network in which (i) nodes represent fields and (ii) edges have weights proportional to incident nodes’ similarities.
I present this network below, restricting my visualisation to the sub-network induced by the 50 edges of largest weight.
To improve readability, I label some nodes using the field abbreviations given in parentheses in the bar chart above.
I draw fields with greater similarities closer together.</p>
<p><img src="figures/map-1.svg" alt=""></p>
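<p>The edge-filtering step behind this figure can be sketched in Python. The similarity values below come from the table above (the post keeps the top 50 edges; only the top 3 are kept here for brevity):</p>

```python
# Sketch of the edge-filtering step: keep only the k largest-weight edges
# of the similarity network before drawing it.
similarities = {
    ("Arts and Humanities", "Social Sciences"): 0.29,
    ("Materials Science", "Physics and Astronomy"): 0.23,
    ("Chemical Engineering", "Chemistry"): 0.19,
    ("Engineering", "Materials Science"): 0.18,
    ("Computer Science", "Mathematics"): 0.15,
}

def top_edges(sims, k):
    """Return the k field pairs with the greatest similarity."""
    return sorted(sims, key=sims.get, reverse=True)[:k]

print(top_edges(similarities, 3))
```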
<p>Overall, fields tend to be most similar to other fields in the same subject area.
The proximities among nodes, reflecting fields’ pairwise similarities, seem intuitive:
Chemistry (Chem) and Chemical Engineering (ChemEng) are obviously similar, the biological sciences are clustered together, and Astronomy researchers probably don’t read many Nursing journals—indeed, there are no journal co-assignments between Physics and Astronomy (PhysAstr) and Nursing.</p>
<p>The paths between fields also make sense.
For example, Social Science (SocSci) relies on Neuroscience (Neur) to the extent it helps explain how people think and behave, which suggests the fields should be connected via Psychology (Psyc).
Likewise, Business, Management and Accounting (BusMgtAcc) relies on Mathematics (Math) to the extent that it helps model how people make decisions, which suggests that the fields should be connected via Decision Science (DscnSci).</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>I exclude the 27th field, “Multidisciplinary,” from my analysis. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>I count journals “fractionally” so that, for example, journals assigned to four fields contribute a quarter to each field’s count. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Policymaking under uncertainty
https://bldavies.com/blog/policymaking-under-uncertainty/
Sun, 17 May 2020 00:00:00 +0000https://bldavies.com/blog/policymaking-under-uncertainty/<p><a href="https://en.wikipedia.org/wiki/COVID-19_pandemic">The COVID-19 pandemic</a> exposes policymakers to several sources of uncertainty:</p>
<ul>
<li>How dangerous is the virus?</li>
<li>What will be the consequences of policies introduced to combat the virus?</li>
<li>How willing and able are people to tolerate those consequences?</li>
</ul>
<p>Under this uncertainty, policymakers must determine which policies to introduce and <a href="https://bldavies.com/blog/covid-19-lockdown-two-sided-uncertainty/">when to introduce them</a>.
On the one hand, acting early may prevent the virus from spreading uncontrollably; on the other, delaying action allows policymakers to collect more information about the virus and the policies best suited to combat it.</p>
<p>This post compares two approaches to policymaking under uncertainty: “commit early,” and “wait and learn.”
These approaches highlight the trade-off between acting decisively and <a href="https://bldavies.com/blog/option-value-waiting/">waiting for more information</a>.
I discuss this trade-off in the context of the New Zealand (NZ) government’s response to COVID-19.
However, my discussion also applies in other (non-NZ and non-pandemic) contexts.</p>
<p>To frame my discussion, consider a policymaker (PM) in a world with three time periods.
Each period, the world moves into one of many “states” that partition the set of possible futures (e.g., “recession” and “no recession”).
Moving between the first and second periods provides more, but not complete, information about the probability distribution of period three states.
The PM can influence this distribution by implementing policies in the first or second period.</p>
<p>Under the “commit early” approach, the PM implements policies in the first period based solely on information available in that period.
To the extent that these policies are expensive and irreversible, their implementation signals the PM’s confidence in the policies’ necessity and efficacy.
This signal can help prevent public dissent.
For example, NZ’s relatively early transition to <a href="https://covid19.govt.nz/alert-system/covid-19-alert-system/">COVID-19 Alert Level</a> 4 signalled our government’s strong belief that the cost of letting the virus spread outweighed the cost of entering a nationwide lockdown.
This signal probably made New Zealanders more willing to tolerate the consequences of staying home—they were told, in no uncertain terms, that doing so would “save lives.”</p>
<p>Committing early also provides information about future conditions to households and firms, who may be less informed than the PM about the distribution of future states.
For example, one of the NZ government’s earliest actions during the COVID-19 pandemic was to implement a <a href="https://www.employment.govt.nz/leave-and-holidays/other-types-of-leave/coronavirus-workplace/wage-subsidy/">wage subsidy scheme</a> that provided 12 weeks of financial support to firms and their employees.
The scheme signalled that our government expected the economic downturn caused by the virus to last at least 12 weeks.
This signal allowed households and firms to calibrate their expectations about, and adjust their behaviour to prepare for, the future.</p>
<p>One problem with the “commit early” approach is that the PM may commit to policies that appear optimal <em>ex ante</em> but turn out to be sub-optimal <em>ex post</em>.
As time passes, the PM gains more information about the distribution of future states and about which policies are most likely to deliver favourable states.
Consequently, the PM may end up regretting committing to, and paying for, policies they wouldn’t have chosen if they had waited for more information.</p>
<p>This regret can be avoided by adopting a “wait and learn” approach.
Under this approach, the PM delays implementing policies until more information about their relative merits arrives in the second period.
This delay allows the PM to preserve the <a href="https://en.wikipedia.org/wiki/Real_options_valuation">real options</a> that would otherwise be destroyed by implementing irreversible policies in the first period.
For example, delaying wage subsidy payouts until the economic impacts of COVID-19 were clearer may have allowed the NZ government to avoid <a href="https://www.stuff.co.nz/business/121254612/coronavirus-business-owner-pockets-150000-from-government-wage-subsidy-and-hes-not-paying-it-back">giving money to businesses that didn’t need it</a>.</p>
<p>However, delaying policy decisions may also delay decisions made by households and firms, who rely on policies as coordination devices.
For example, delaying the decision to allow inter-regional transport may delay freight companies’ decisions to schedule inter-regional shipments, which, in turn, delays production decisions by firms with inter-regional supply chains.
These supply-side delays may induce undesirable demand-side responses, such as <a href="https://www.rnz.co.nz/news/national/412425/supermarkets-urge-people-to-stop-panic-buying">“panic buying” food and homeware</a>.
The more peoples’ decisions depend on others’ decisions, the more the PM’s initial delay spreads throughout the economy and the more disruptive that delay becomes.</p>
<p>Delaying policy decisions also allows the costs of indecision (e.g., deaths from uncontrolled exposure to COVID-19) to accumulate.
The PM must trade these costs off with the benefits of waiting for more information.
These benefits may appear large <em>ex ante</em> but turn out to be small <em>ex post</em>.
Moving into the second period may not change the PM’s preferences over policy options if the new information merely confirms what was already known in the first period.
This confirmation may make the PM believe that they accrued the costs of delaying action unnecessarily.
Thus, whereas the “commit early” approach induces regret when initial estimates turn out to be wrong, the “wait and learn” approach induces regret when initial estimates turn out to be right.</p>
<p>Although committing early destroys some real options, it may create other real options in their place.
For example, NZ’s early lockdown likely prevented the virus from overburdening our healthcare providers, giving them time to plan and prepare for a range of future COVID-19 scenarios.
Similarly, our government’s <a href="https://www.beehive.govt.nz/release/govt-backs-rbnz-move-support-economy-lower-interest-rates">agreement to let the Reserve Bank buy government bonds</a> gave the Bank more flexibility to provide monetary stimulus if the need arises.
In both cases, taking early, decisive action opened paths that may have been unavailable if our government had chosen to wait and learn.</p>
<p>Ultimately, the PM should adopt the “commit early” approach whenever the net benefits of acting early exceed the net benefits of waiting.
However, valuing these net benefits requires</p>
<ol>
<li>estimating the likelihood and quality of future states,</li>
<li>choosing a (politically defensible) rate at which to discount future payoffs, and</li>
<li>valuing changes in net optionality.</li>
</ol>
<p>These three requirements raise the complexity of the PM’s choice problem and the cognitive cost of finding its solution.
Such <a href="https://en.wikipedia.org/wiki/Bounded_rationality">bounds</a> on the PM’s rationality may lead to sub-optimal policy decisions, both <em>ex ante</em> and <em>ex post</em>.</p>
<hr>
<p><em>Thanks to my dad for inspiring this post and to <a href="https://motu.nz/about-us/people/arthur-grimes/">Arthur Grimes</a> for his comments.</em></p>
Transitivity in positive correlations
https://bldavies.com/blog/transitivity-positive-correlations/
Fri, 24 Apr 2020 00:00:00 +0000https://bldavies.com/blog/transitivity-positive-correlations/<p>Let <code>\(X\)</code>, <code>\(Y\)</code> and <code>\(Z\)</code> be random variables.
Suppose that both <code>\(X\)</code> and <code>\(Y\)</code> are positively correlated with <code>\(Z\)</code>.
Are <code>\(X\)</code> and <code>\(Y\)</code> positively correlated?</p>
<p>The answer to this question is “not necessarily.”
To see why, let <code>\(\rho\in[-1,1]\)</code> be a constant, and define the random variables
<code>$$X=\rho Z+W \tag{1}$$</code>
and
<code>$$Y=\rho Z-W \tag{2}$$</code>
with <code>\(Z\sim N(0,1)\)</code> and <code>\(W\sim N(0,1-\rho^2)\)</code>.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
Then <code>\(W\)</code>, <code>\(X\)</code>, <code>\(Y\)</code> and <code>\(Z\)</code> have zero means, while <code>\(X\)</code>, <code>\(Y\)</code> and <code>\(Z\)</code> have unit variances.
It follows that
<code>$$\begin{align} \newcommand{\E}{\mathrm{E}} \newcommand{\Corr}{\mathrm{Corr}} \newcommand{\Cov}{\mathrm{Cov}} \newcommand{\Var}{\mathrm{Var}} \Corr(X,Y) &= \frac{\Cov(X,Y)}{\sqrt{\Var(X)}\sqrt{\Var(Y)}} \\ &= \Cov(X,Y) \\ &= \E[XY]-\E[X]\E[Y] \\ &= \E[XY], \end{align}$$</code>
and similarly <code>\(\Corr(X,Z)=\E[XZ]\)</code> and <code>\(\Corr(Y,Z)=\E[YZ]\)</code>.
Now
<code>$$\begin{align} \E[XZ] &= \E[(\rho Z+W)Z] \\ &= \rho\E[Z^2]+\E[WZ] \\ &= \rho\Var(Z)+\rho\E[Z]^2+\Cov(W,Z)+\E[W]\E[Z] \\ &= \rho \end{align}$$</code>
because <code>\(W\)</code> and <code>\(Z\)</code> are independent.
A similar argument yields <code>\(\E[YZ]=\rho\)</code>.
Finally, substituting <code>\((1)\)</code> into <code>\((2)\)</code> so as to eliminate <code>\(W\)</code> gives
<code>$$Y=2\rho Z-X,$$</code>
from which we obtain
<code>$$\begin{align} \Corr(X,Y) &= \E[XY] \\ &= \E[X(2\rho Z-X)] \\ &= 2\rho\E[XZ]-\E[X^2] \\ &= 2\rho\E[XZ]-\Var(X)-\E[X]^2 \\ &= 2\rho^2-1. \end{align}$$</code>
Thus, if <code>\(\rho\in(0,1/\sqrt{2})\)</code> then <code>\(X\)</code> and <code>\(Y\)</code> share a negative correlation even though both are correlated positively with <code>\(Z\)</code>.
Intuitively, if <code>\(\rho\)</code> is sufficiently small then the negative correlation between the error terms <code>\(W\)</code> and <code>\(-W\)</code> dominates the positive correlations between <code>\(X\)</code> and <code>\(Z\)</code>, and <code>\(Y\)</code> and <code>\(Z\)</code>.</p>
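<p>A quick simulation in Python confirms the result numerically. With <code>\(\rho=1/2\)</code>, the derivation predicts <code>\(\Corr(X,Z)=\Corr(Y,Z)=1/2\)</code> but <code>\(\Corr(X,Y)=2\rho^2-1=-1/2\)</code> (this is a sketch using a plain Monte Carlo draw, so the estimates are only approximately equal to their theoretical values):</p>

```python
# Numerical sanity check: with rho = 0.5, X and Y each correlate
# positively with Z, yet Corr(X, Y) = 2*rho^2 - 1 = -0.5.
import math
import random

random.seed(0)
rho = 0.5
n = 200_000

def corr(u, v):
    """Sample Pearson correlation of two equal-length lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)
    su = math.sqrt(sum((a - mu) ** 2 for a in u) / len(u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v) / len(v))
    return cov / (su * sv)

Z = [random.gauss(0, 1) for _ in range(n)]
W = [random.gauss(0, math.sqrt(1 - rho**2)) for _ in range(n)]
X = [rho * z + w for z, w in zip(Z, W)]  # X = rho*Z + W
Y = [rho * z - w for z, w in zip(Z, W)]  # Y = rho*Z - W

print(round(corr(X, Z), 2), round(corr(Y, Z), 2), round(corr(X, Y), 2))
```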
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Here <code>\(N(\mu,\sigma^2)\)</code> denotes the normal distribution with mean <code>\(\mu\)</code> and variance <code>\(\sigma^2\)</code>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Greedy Pig strategies
https://bldavies.com/blog/greedy-pig-strategies/
Sat, 18 Apr 2020 00:00:00 +0000https://bldavies.com/blog/greedy-pig-strategies/<p><a href="https://nzmaths.co.nz/resource/greedy-pig-1">Greedy Pig</a> is a game used to teach probability and statistics to primary school students.
The game comprises a set of rounds in which players roll a fair six-sided die until either they choose to stop, in which case their score for that round equals the sum of the rolled values, or they roll a one, in which case their score for that round is zero.
The player with the highest total score across all rounds wins.</p>
<p>Since the scores obtained in each round are independent, players can maximise their expected total score across all rounds by maximising their expected score in each round independently.
One strategy is to commit to rolling the die <code>\(n\)</code> times in each round, where <code>\(n\)</code> is chosen to maximise the expected resulting score.
We can make this choice as follows.
First, let <code>\(X_k\)</code> be the outcome of the <code>\(k^\text{th}\)</code> die roll.
This outcome has probability distribution
<code>$$\Pr(X_k=x)=\begin{cases}1/6&\text{if}\ x\in\{1,2,3,4,5,6\}\\ 0&\text{otherwise}.\end{cases}$$</code>
Next, let <code>\(1_{X_k>1}\)</code> be the indicator variable for the event in which <code>\(X_k>1\)</code>.
Then the score after <code>\(n\)</code> rolls is given by
<code>$$S_n=\sum_{k=1}^nX_k\pi_n,$$</code>
where
<code>$$\pi_n=\prod_{k=1}^n1_{X_k>1}$$</code>
is the indicator variable for the event in which all of the first <code>\(n\)</code> rolls exceed unity.
Now, by the linearity of the expectation operator and the <a href="https://en.wikipedia.org/wiki/Law_of_total_expectation">law of total expectation</a>, the score <code>\(S_n\)</code> has expected value
<code>$$\begin{align} \mathrm{E}[S_n] &= \sum_{k=1}^n\mathrm{E}[X_k\pi_n] \\ &= \sum_{k=1}^n\left(\mathrm{E}[X_k\pi_n\,\vert\,\pi_n=1]\Pr(\pi_n=1)+\mathrm{E}[X_k\pi_n\,\vert\,\pi_n=0]\Pr(\pi_n=0)\right) \\ &= \sum_{k=1}^n\mathrm{E}[X_k\,\vert\,\pi_n=1]\Pr(\pi_n=1). \end{align}$$</code>
Since <code>\(\pi_n=1\)</code> if and only if <code>\(X_k>1\)</code> for each <code>\(k\in\{1,2,\ldots,n\}\)</code>, and since die rolls are independent, we have
<code>$$\begin{align} \mathrm{E}[X_k\,\vert\,\pi_n=1] &= \mathrm{E}[X_k\,\vert\,X_k>1] \\ &= \frac{2+3+4+5+6}{5} \\ &= 4 \end{align}$$</code>
and
<code>$$\Pr(\pi_n=1)=\left(\frac{5}{6}\right)^n.$$</code>
Hence
<code>$$\mathrm{E}[S_n]=4n\left(\frac{5}{6}\right)^n$$</code>
for each <code>\(n\)</code>, which obtains its maximum value of 8.04 when <code>\(n\in\{5,6\}\)</code>.
Therefore, players who commit to a fixed number of rolls should commit to five or six rolls in each round to maximise their expected score.</p>
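<p>Evaluating <code>\(\mathrm{E}[S_n]=4n(5/6)^n\)</code> with exact rational arithmetic confirms this (a Python sketch alongside the post's R code; note that <code>\(\mathrm{E}[S_5]\)</code> and <code>\(\mathrm{E}[S_6]\)</code> are exactly equal, since <code>\(4\cdot6\cdot(5/6)^6=4\cdot5\cdot(5/6)^5\)</code>):</p>

```python
# Evaluate E[S_n] = 4n(5/6)^n exactly to confirm that the maximum
# (about 8.04) is attained at both n = 5 and n = 6.
from fractions import Fraction

def expected_score(n):
    """Expected round score from committing to n rolls."""
    return 4 * n * Fraction(5, 6) ** n

for n in range(1, 9):
    print(n, float(expected_score(n)))
```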
<p>Another strategy is to continue rolling until reaching some target score <code>\(S^*\)</code>.
This strategy allows players to respond to their realised sequence of rolls.
Intuitively, players who realise a run of high-value rolls have more to lose by rolling again and so may be less willing to do so.
To determine the value of <code>\(S^*\)</code>, let <code>\(Y_k\)</code> denote the payoff from the <code>\(k^\text{th}\)</code> roll and notice that this payoff has expected value
<code>$$\begin{align} \mathrm{E}[Y_k] &= \mathrm{E}[X_k\,\vert\,X_k>1]\Pr(X_k>1)-S_{k-1}\Pr(X_k=1) \\ &= \frac{20-S_{k-1}}{6}. \end{align}$$</code>
Thus, rolling again delivers a positive expected payoff if and only if <code>\(S_{k-1}<20\)</code>, and so players seeking to maximise their expected score should stop rolling when their score reaches <code>\(S^*=20\)</code>.
This argument also clarifies why both <code>\(n=5\)</code> and <code>\(n=6\)</code> maximise <code>\(\mathrm{E}[S_n]\)</code>: players with a non-zero score after five rolls have a conditional expected score of <code>\(\mathrm{E}[S_5\,\vert\,\pi_5=1]=20\)</code>, so the expected gain in score for such players from a sixth roll is zero.</p>
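<p>The threshold can be checked by enumerating a single extra roll from a current score <code>\(s\)</code>: face one forfeits <code>\(s\)</code>, while faces two through six add their values, so the expected gain is <code>\((20-s)/6\)</code>, positive exactly when <code>\(s<20\)</code>. A minimal Python sketch:</p>

```python
# Enumerate one more roll from a current score s: face 1 forfeits s,
# faces 2-6 add their value, so the expected gain is (20 - s) / 6.
from fractions import Fraction

def expected_gain(s):
    """Expected change in round score from one more roll, given score s."""
    return sum(Fraction(-s if x == 1 else x, 6) for x in range(1, 7))

print(expected_gain(19), expected_gain(20), expected_gain(21))
```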
<p>We can compare the “roll five times” and “stop at 20” strategies via simulation.
First, define a function <code>simulate_strategy</code> that takes as arguments either a fixed number of rolls <code>n</code> or a target score <code>t</code>, and simulates the player’s score from adopting their chosen strategy:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">simulate_strategy</span> <span class="o"><-</span> <span class="nf">function</span><span class="p">(</span><span class="n">n</span> <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span> <span class="n">t</span> <span class="o">=</span> <span class="kc">NULL</span><span class="p">)</span> <span class="p">{</span>
<span class="nf">if </span><span class="p">(</span><span class="nf">is.null</span><span class="p">(</span><span class="n">n</span><span class="p">)</span> <span class="o">&</span> <span class="nf">is.null</span><span class="p">(</span><span class="n">t</span><span class="p">))</span> <span class="nf">stop</span><span class="p">(</span><span class="s">'`n` or `t` must be non-NULL'</span><span class="p">)</span>
<span class="n">score</span> <span class="o"><-</span> <span class="m">0</span>
<span class="n">k</span> <span class="o"><-</span> <span class="m">0</span>
<span class="n">done</span> <span class="o"><-</span> <span class="bp">F</span>
<span class="nf">while </span><span class="p">(</span><span class="o">!</span><span class="n">done</span><span class="p">)</span> <span class="p">{</span>
<span class="n">x</span> <span class="o"><-</span> <span class="nf">sample</span><span class="p">(</span><span class="m">1</span><span class="o">:</span><span class="m">6</span><span class="p">,</span> <span class="m">1</span><span class="p">)</span>
<span class="nf">if </span><span class="p">(</span><span class="n">x</span> <span class="o">==</span> <span class="m">1</span><span class="p">)</span> <span class="p">{</span>
<span class="n">score</span> <span class="o"><-</span> <span class="m">0</span>
<span class="n">done</span> <span class="o"><-</span> <span class="bp">T</span>
<span class="p">}</span> <span class="n">else</span> <span class="p">{</span>
<span class="n">score</span> <span class="o"><-</span> <span class="n">score</span> <span class="o">+</span> <span class="n">x</span>
<span class="n">k</span> <span class="o"><-</span> <span class="n">k</span> <span class="o">+</span> <span class="m">1</span>
<span class="n">done</span> <span class="o"><-</span> <span class="nf">ifelse</span><span class="p">(</span><span class="o">!</span><span class="nf">is.null</span><span class="p">(</span><span class="n">n</span><span class="p">),</span> <span class="n">k</span> <span class="o">>=</span> <span class="n">n</span><span class="p">,</span> <span class="n">score</span> <span class="o">>=</span> <span class="n">t</span><span class="p">)</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="n">score</span>
<span class="p">}</span>
</code></pre></div><p>Next, define two wrapper functions for simulating each strategy separately:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">simulate_n</span> <span class="o"><-</span> <span class="nf">function</span><span class="p">(</span><span class="n">n</span><span class="p">)</span> <span class="nf">simulate_strategy</span><span class="p">(</span><span class="n">n</span> <span class="o">=</span> <span class="n">n</span><span class="p">)</span>
<span class="n">simulate_t</span> <span class="o"><-</span> <span class="nf">function</span><span class="p">(</span><span class="n">t</span><span class="p">)</span> <span class="nf">simulate_strategy</span><span class="p">(</span><span class="n">t</span> <span class="o">=</span> <span class="n">t</span><span class="p">)</span>
</code></pre></div><p>Finally, we can simulate 10,000 games using each strategy and store the realised scores:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">set.seed</span><span class="p">(</span><span class="m">0</span><span class="p">)</span>
<span class="n">scores_n</span> <span class="o"><-</span> <span class="nf">sapply</span><span class="p">(</span><span class="nf">rep</span><span class="p">(</span><span class="m">5</span><span class="p">,</span> <span class="m">1e4</span><span class="p">),</span> <span class="n">simulate_n</span><span class="p">)</span>
<span class="n">scores_t</span> <span class="o"><-</span> <span class="nf">sapply</span><span class="p">(</span><span class="nf">rep</span><span class="p">(</span><span class="m">20</span><span class="p">,</span> <span class="m">1e4</span><span class="p">),</span> <span class="n">simulate_t</span><span class="p">)</span>
</code></pre></div><p>The “stop at 20” strategy delivers a mean score of 8.13, which is 1.69% higher than the mean score delivered by the “roll five times” strategy.
However, the “stop at 20” strategy is also 4.12% more likely to deliver a score of zero than the “roll five times” strategy.
We can see this by plotting the distributions of simulated scores delivered by the two strategies:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">dplyr</span><span class="p">)</span>
<span class="nf">library</span><span class="p">(</span><span class="n">ggplot2</span><span class="p">)</span>
<span class="nf">library</span><span class="p">(</span><span class="n">tidyr</span><span class="p">)</span>
<span class="nf">tibble</span><span class="p">(</span><span class="n">`Roll five times`</span> <span class="o">=</span> <span class="n">scores_n</span><span class="p">,</span> <span class="n">`Stop at 20`</span> <span class="o">=</span> <span class="n">scores_t</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">gather</span><span class="p">(</span><span class="n">Strategy</span><span class="p">,</span> <span class="n">Score</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">count</span><span class="p">(</span><span class="n">Strategy</span><span class="p">,</span> <span class="n">Score</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">ggplot</span><span class="p">(</span><span class="nf">aes</span><span class="p">(</span><span class="n">Score</span><span class="p">,</span> <span class="n">n</span> <span class="o">/</span> <span class="m">1e4</span><span class="p">))</span> <span class="o">+</span>
<span class="nf">geom_col</span><span class="p">(</span><span class="nf">aes</span><span class="p">(</span><span class="n">fill</span> <span class="o">=</span> <span class="n">Strategy</span><span class="p">),</span> <span class="n">alpha</span> <span class="o">=</span> <span class="m">0.75</span><span class="p">,</span> <span class="n">position</span> <span class="o">=</span> <span class="s">'dodge'</span><span class="p">)</span> <span class="o">+</span>
<span class="nf">labs</span><span class="p">(</span><span class="n">y</span> <span class="o">=</span> <span class="s">'Relative frequency'</span><span class="p">,</span>
<span class="n">title</span> <span class="o">=</span> <span class="s">'Comparing Greedy Pig strategies'</span><span class="p">,</span>
<span class="n">subtitle</span> <span class="o">=</span> <span class="s">'Distribution of scores across 10,000 simulated games'</span><span class="p">)</span>
</code></pre></div><p><img src="figures/distributions-1.svg" alt=""></p>
<p>The distribution of non-zero scores under the “stop at 20” strategy is asymmetric about its conditional mean of 21.69, and is bounded below by 20 and above by 25.
In contrast, the distribution of non-zero scores under the “roll five times” strategy is symmetric about its conditional mean of 20, and is bounded below by 10 and above by 30.</p>
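<p>These bounds are easy to verify with a minimal, self-contained re-implementation of a single round (a sketch, assuming the rules used above: each roll of a fair die adds its face value to the score, except a 1, which ends the round with a score of zero; the full <code>simulate_strategy</code> function defined earlier is more general):</p>

```r
# Minimal single-round simulators for the two strategies
roll_five <- function() {
  score <- 0
  for (i in 1:5) {
    roll <- sample(6, 1)
    if (roll == 1) return(0)  # rolling a 1 zeroes the score and ends the round
    score <- score + roll
  }
  score
}

stop_at_20 <- function() {
  score <- 0
  while (score < 20) {
    roll <- sample(6, 1)
    if (roll == 1) return(0)
    score <- score + roll
  }
  score
}

set.seed(0)
x <- replicate(1e4, roll_five())
y <- replicate(1e4, stop_at_20())
range(x[x > 0])  # non-zero scores lie in [10, 30]
range(y[y > 0])  # non-zero scores lie in [20, 25]
```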
<p>The “roll five times” and “stop at 20” strategies are heuristics for maximising players’ scores in each round.
These heuristics can be sub-optimal when the goal is to win the game rather than to maximise the score in a single round.
For example, if the only player left rolling in the last round has already accumulated enough total score to win the game, then they should stop rolling immediately rather than risk losing their lead.</p>
Generating random graphs with communities
https://bldavies.com/blog/generating-random-graphs-communities/
Tue, 07 Apr 2020 00:00:00 +0000https://bldavies.com/blog/generating-random-graphs-communities/<p>Suppose I want to generate some random graphs that exhibit <a href="https://en.wikipedia.org/wiki/Community_structure">community structure</a>.
For example, I might be interested in simulating how information or diseases spread in social networks, and I suspect—but lack data to confirm—that people sort into communities based on their personal and professional interests.</p>
<p>One approach is to use the <a href="https://en.wikipedia.org/wiki/Stochastic_block_model">stochastic block model</a>.
In this model, each vertex belongs to one of <code>\(r\)</code> disjoint communities <code>\(C_1,C_2,\ldots,C_r\)</code>, and vertices <code>\(u\in C_i\)</code> and <code>\(v\in C_j\)</code> are adjacent with probability <code>\(p_{ij}\)</code>.
Varying <code>\(p_{ij}\)</code> across <code>\((i,j)\)</code> pairs varies the level of connectivity within and between communities.
For example, choosing
<code>$$p_{ij}=\begin{cases} p & \text{if}\ i=j \\ q & \text{otherwise} \end{cases}$$</code>
for some probabilities <code>\(p\)</code> and <code>\(q<p\)</code> delivers random graphs that tend to contain more edges within communities than between communities.
We can simulate this special case—known as the “planted partition model” (PPM)—in R using the <code>sample_ppm</code> function defined below.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">igraph</span><span class="p">)</span>
<span class="n">sample_ppm</span> <span class="o"><-</span> <span class="nf">function</span><span class="p">(</span><span class="n">memb</span><span class="p">,</span> <span class="n">p</span><span class="p">,</span> <span class="n">q</span><span class="p">)</span> <span class="p">{</span>
<span class="n">mat</span> <span class="o"><-</span> <span class="nf">t</span><span class="p">(</span><span class="nf">combn</span><span class="p">(</span><span class="nf">seq_along</span><span class="p">(</span><span class="n">memb</span><span class="p">),</span> <span class="m">2</span><span class="p">))</span>
<span class="n">prob</span> <span class="o"><-</span> <span class="nf">c</span><span class="p">(</span><span class="n">q</span><span class="p">,</span> <span class="n">p</span><span class="p">)</span><span class="n">[1</span> <span class="o">+</span> <span class="p">(</span><span class="n">memb[mat[</span><span class="p">,</span> <span class="m">1</span><span class="n">]]</span> <span class="o">==</span> <span class="n">memb[mat[</span><span class="p">,</span> <span class="m">2</span><span class="n">]]</span><span class="p">)</span><span class="n">]</span>
<span class="n">el</span> <span class="o"><-</span> <span class="n">mat</span><span class="nf">[which</span><span class="p">(</span><span class="nf">runif</span><span class="p">(</span><span class="nf">nrow</span><span class="p">(</span><span class="n">mat</span><span class="p">))</span> <span class="o"><</span> <span class="n">prob</span><span class="p">),</span> <span class="n">]</span>
<span class="nf">graph_from_edgelist</span><span class="p">(</span><span class="n">el</span><span class="p">,</span> <span class="n">directed</span> <span class="o">=</span> <span class="kc">FALSE</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div><p><code>sample_ppm</code> takes as arguments a vector <code>memb</code> of community memberships, and the edge probabilities <code>\(p\)</code> and <code>\(q\)</code>.
The function constructs a matrix <code>mat</code> of vertex pairs, determines the probabilities that these pairs are adjacent, and uses these probabilities to create a random edge list <code>el</code> and corresponding random graph.</p>
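<p>As a quick sanity check (base R only, with an assumed membership vector), the expected numbers of within- and between-community edges are <code>\(p\)</code> and <code>\(q\)</code> times the corresponding numbers of vertex pairs:</p>

```r
memb <- rep(1:4, each = 5)  # 20 vertices in four equally sized communities (assumed)
mat <- t(combn(seq_along(memb), 2))  # all vertex pairs, as in sample_ppm
same <- memb[mat[, 1]] == memb[mat[, 2]]  # flags pairs in the same community
p <- 1/3
q <- 0.01
c(within = p * sum(same), between = q * sum(!same))  # expected edge counts
```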
<p>For example, let’s simulate a PPM random graph with 50 vertices and four communities, and with edge probabilities <code>\(p=1/3\)</code> and <code>\(q=0.01\)</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">set.seed</span><span class="p">(</span><span class="m">0</span><span class="p">)</span>
<span class="n">memb</span> <span class="o"><-</span> <span class="nf">sample</span><span class="p">(</span><span class="m">1</span><span class="o">:</span><span class="m">4</span><span class="p">,</span> <span class="m">50</span><span class="p">,</span> <span class="n">replace</span> <span class="o">=</span> <span class="kc">TRUE</span><span class="p">)</span>
<span class="n">G</span> <span class="o"><-</span> <span class="nf">sample_ppm</span><span class="p">(</span><span class="n">memb</span><span class="p">,</span> <span class="m">1</span><span class="o">/</span><span class="m">3</span><span class="p">,</span> <span class="m">0.01</span><span class="p">)</span>
</code></pre></div><p>We can visualise <code>G</code> using <a href="https://cran.r-project.org/package=ggraph">ggraph</a>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">ggraph</span><span class="p">)</span>
<span class="n">G</span> <span class="o">%>%</span>
<span class="nf">ggraph</span><span class="p">()</span> <span class="o">+</span>
<span class="nf">geom_edge_link0</span><span class="p">(</span><span class="n">colour</span> <span class="o">=</span> <span class="s">'grey75'</span><span class="p">)</span> <span class="o">+</span>
<span class="nf">geom_node_point</span><span class="p">(</span><span class="nf">aes</span><span class="p">(</span><span class="n">col</span> <span class="o">=</span> <span class="nf">factor</span><span class="p">(</span><span class="n">memb</span><span class="p">)),</span> <span class="n">show.legend</span> <span class="o">=</span> <span class="kc">FALSE</span><span class="p">)</span> <span class="o">+</span>
<span class="nf">scale_colour_brewer</span><span class="p">(</span><span class="n">palette</span> <span class="o">=</span> <span class="s">'Set1'</span><span class="p">)</span> <span class="o">+</span>
<span class="nf">theme_void</span><span class="p">()</span>
</code></pre></div><p><img src="figures/network-1.svg" alt=""></p>
<p>The communities in <code>G</code>—identified by vertices’ colours—contain many internal edges but few external edges.
Thus, if informed or infected vertices spread information or disease among their neighbours with equal probabilities, then we would expect faster diffusion within communities than between communities.</p>
Insurance and saving
https://bldavies.com/blog/insurance-saving/
Fri, 03 Apr 2020 00:00:00 +0000https://bldavies.com/blog/insurance-saving/<p>The seminal model of insurance demand (<a href="https://www.jstor.org/stable/1812044">Arrow, 1963</a>; <a href="https://www.jstor.org/stable/1830049">Mossin, 1968</a>) describes a consumer who chooses the level of coverage <code>\(I^*\)</code> that maximises their expected utility
<code>$$\phi(I)=(1-p)u(Y-\pi I)+pu(Y-\pi I-L+I),$$</code>
where
<code>\(p\)</code> is the probability of suffering a binary loss of fixed size <code>\(L\)</code>,
<code>\(Y\)</code> is the consumer’s riskless income,
<code>\(u\)</code> is their increasing and concave utility function,
and <code>\(\pi\)</code> is the per-unit price of insurance.
In this model, the consumer buys full insurance (i.e., chooses <code>\(I^*=L\)</code>) if and only if the premium is actuarially fair (i.e., if <code>\(\pi=p\)</code>), and their demand for insurance decreases with income if their <a href="https://en.wikipedia.org/wiki/Risk_aversion#Absolute_risk_aversion">absolute risk aversion</a> decreases with wealth.</p>
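<p>The full-insurance result can be checked numerically. The sketch below assumes log utility and hypothetical parameter values, and maximises <code>\(\phi\)</code> over <code>\(I\)</code> at an actuarially fair premium and at a loaded one:</p>

```r
Y <- 10; L <- 5; p <- 0.1  # hypothetical parameter values
phi <- function(I, pi) (1 - p) * log(Y - pi * I) + p * log(Y - pi * I - L + I)
I_fair   <- optimize(phi, c(0, 2 * L), pi = p,    maximum = TRUE)$maximum
I_loaded <- optimize(phi, c(0, 2 * L), pi = 0.15, maximum = TRUE)$maximum
c(fair = I_fair, loaded = I_loaded)  # I* = L when pi = p; I* < L when pi > p
```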
<p>A more realistic model would contain at least two periods:
one in which the consumer buys insurance and
one in which they might suffer an insurable loss.
However, in a two-period model, the consumer suffers a form of <a href="https://en.wikipedia.org/wiki/Incomplete_markets">market incompleteness</a>:
they can buy insurance to shift income into the future, but they cannot do the opposite nor vary their net income in the future no-loss state.</p>
<p>This market incompleteness can be resolved by allowing the consumer to save or borrow at the riskless interest rate.
Then they can save or borrow to smooth income across time, and buy insurance to smooth income across future states of nature.
In particular, they can choose the level of coverage <code>\(I^*\)</code> and savings commitment <code>\(S^*\)</code> that maximise their expected utility
<code>$$\begin{align} \psi(I,S) &= u(Y_1-\pi I-S) \\ &\quad+\delta[(1-p)u(Y_2+(1+R)S)+pu(Y_2+(1+R)S-L+I)], \end{align}$$</code>
where
<code>\(Y_1\)</code> and <code>\(Y_2\)</code> are the consumer’s riskless incomes in the first and second periods,
<code>\(\delta\in(0,1]\)</code> is their intertemporal discount factor, and
<code>\(R\)</code> is the riskless interest rate.
In this two-period model, the consumer buys full insurance if and only if
<code>$$\pi=\frac{p}{1+R},$$</code>
which is the two-period equivalent of the actuarially fair premium rate.
One can also show that
if the consumer cannot save then <code>\(I^*\)</code> is increasing in <code>\(Y_1\)</code> and decreasing in <code>\(Y_2\)</code>, but
if they can save then increases in <code>\(Y_1\)</code> and <code>\(Y_2\)</code> shift <code>\(I^*\)</code> in the same direction as they shift the consumer’s absolute risk aversion.
Intuitively, if the consumer cannot save and they want to shift income into the future then the only way to do so is to buy more insurance.
In contrast, if the consumer can save then they can use their savings commitment to smooth increases in income across time, and adjust their insurance demand according to whether such increases make them more or less absolute risk averse.</p>
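<p>A numerical sketch (again assuming log utility and hypothetical parameter values, with the premium set at the two-period fair rate) confirms that the consumer buys full insurance and uses <code>\(S^*\)</code> to smooth consumption across time:</p>

```r
Y1 <- 10; Y2 <- 10; L <- 5; p <- 0.1; delta <- 0.96; R <- 0.05  # assumed values
pi <- p / (1 + R)  # two-period fair premium rate
psi <- function(x) {
  I <- x[1]; S <- x[2]
  c1  <- Y1 - pi * I - S   # first-period consumption
  c2n <- Y2 + (1 + R) * S  # second-period consumption in the no-loss state
  c2l <- c2n - L + I       # second-period consumption in the loss state
  if (min(c1, c2n, c2l) <= 0) return(-1e10)  # penalise infeasible choices
  log(c1) + delta * ((1 - p) * log(c2n) + p * log(c2l))
}
res <- optim(c(1, 0), psi, control = list(fnscale = -1))  # maximise over (I, S)
res$par  # I* equals L = 5; here S* is negative (the consumer borrows)
```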
Optimal training loads
https://bldavies.com/blog/optimal-training-loads/
Mon, 30 Mar 2020 00:00:00 +0000https://bldavies.com/blog/optimal-training-loads/<p>Suppose I’m training for an upcoming race.
I want to choose the training load that maximises my expected performance on race day.
The harder I train, the better my performance will be but the more likely I am to injure myself.
How should I balance this trade-off between better performance and greater risk of injury?</p>
<p>We can model this choice problem as follows.
Let <code>\(t\in[0,1]\)</code> represent my training load and <code>\(a\in\mathbb{R}\)</code> my natural ability.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
My performance on race day is some function <code>\(f(t,a)\)</code> of <code>\(t\)</code> and <code>\(a\)</code>.
I assume that this function is increasing and concave in <code>\(t\)</code> (so that there are positive but diminishing returns to training), and increasing in <code>\(a\)</code>.</p>
<p>I can’t compete if I get injured, which occurs with some probability <code>\(p(t,r)\)</code> that depends on my training load and my natural resistance to injury <code>\(r\in\mathbb{R}\)</code>.
I assume that <code>\(p\)</code> is increasing and convex in <code>\(t\)</code> (so that training increases my likelihood of injury at an increasing rate), and decreasing in <code>\(r\)</code>.</p>
<p>My objective is to choose the training load <code>\(t^*\)</code> that maximises my expected performance<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
<code>$$\psi(t)=(1-p(t,r))\,f(t,a).$$</code>
My assumptions on the shapes of <code>\(f\)</code> and <code>\(p\)</code> imply that <code>\(\psi\)</code> is concave in <code>\(t\)</code>.
Therefore, the unique optimal training load <code>\(t^*\)</code> satisfies the first-order condition (FOC)
<code>$$\begin{align} 0 &= \psi'(t^*) \\ &= -p_t(t^*,r)\,f(t^*,a)+(1-p(t^*,r))\,f_t(t^*,a), \end{align}$$</code>
where <code>\(\psi'\)</code> denotes the derivative of <code>\(\psi\)</code> with respect to <code>\(t\)</code>, and
where <code>\(p_t\)</code> and <code>\(f_t\)</code> denote the partial derivatives of <code>\(p\)</code> and <code>\(f\)</code> with respect to <code>\(t\)</code>.
The FOC can be rewritten as
<code>$$(1-p(t^*,r))\,f_t(t^*,a)=p_t(t^*,r)f(t^*,a),$$</code>
which shows that I should keep training until the marginal benefit of improved performance (the left-hand side) equals the marginal cost of injury becoming more probable (the right-hand side).</p>
<p>I can’t determine the value of <code>\(t^*\)</code> without further assumptions on <code>\(f\)</code> and <code>\(p\)</code>.
However, I can determine the relationship between <code>\(t^*\)</code> and the parameters <code>\(a\)</code> and <code>\(r\)</code>.
Since <code>\(\psi''(t)<0\)</code> for all feasible <code>\(t\)</code>, the <a href="https://en.wikipedia.org/wiki/Implicit_function_theorem">implicit function theorem</a> (IFT) implies that
<code>$$\mathrm{sign}\frac{\partial t^*}{\partial \theta}=\mathrm{sign}\frac{\partial \psi'(t^*)}{\partial \theta}$$</code>
for each parameter <code>\(\theta\in\{a,r\}\)</code>.
Now
<code>$$\frac{\partial \psi'(t^*)}{\partial a}=-p_t(t^*,r)\,f_a(t^*,a)+(1-p(t^*,r))\,f_{ta}(t^*,a),$$</code>
where <code>\(f_a\)</code> and <code>\(f_{ta}\)</code> denote the partial derivatives of <code>\(f\)</code> and <code>\(f_t\)</code> with respect to <code>\(a\)</code>, and
<code>$$\frac{\partial \psi'(t^*)}{\partial r}=-p_{tr}(t^*,r)\,f(t^*,a)-p_r(t^*,r)\,f_t(t^*,a),$$</code>
where <code>\(p_{tr}\)</code> and <code>\(p_r\)</code> denote the partial derivatives of <code>\(p_t\)</code> and <code>\(p\)</code> with respect to <code>\(r\)</code>.
By <a href="https://en.wikipedia.org/wiki/Symmetry_of_second_derivatives">Young’s theorem</a>, the mixed partials <code>\(f_{ta}\)</code> and <code>\(p_{tr}\)</code> satisfy
<code>$$f_{ta}(t,a)=\frac{\partial}{\partial t}\left(\frac{\partial f(t,a)}{\partial a}\right)$$</code>
and
<code>$$p_{tr}(t,r)=\frac{\partial}{\partial t}\left(\frac{\partial p(t,r)}{\partial r}\right)$$</code>
for all feasible <code>\(t\)</code>, <code>\(a\)</code> and <code>\(r\)</code>.
Thus, it seems reasonable to assume that <code>\(f_{ta}(t,a)\le0\)</code> and <code>\(p_{tr}(t,r)\le0\)</code>, meaning that training erodes the marginal benefits of natural ability and resistance to injury.
These assumptions, together with the IFT, imply that <code>\(t^*\)</code> is decreasing in <code>\(a\)</code> and increasing in <code>\(r\)</code>—that is, I should train harder if I become less naturally able or more resistant to injury.</p>
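<p>These comparative statics can be illustrated numerically. The functional forms below are assumptions chosen to satisfy the conditions above: <code>\(f(t,a)=a+\sqrt{t}\)</code> is increasing and concave in <code>\(t\)</code> with <code>\(f_{ta}=0\)</code>, and <code>\(p(t,r)=t^2e^{-r}\)</code> is increasing and convex in <code>\(t\)</code> with <code>\(p_{tr}\le0\)</code>:</p>

```r
# Optimal training load under the assumed functional forms
t_star <- function(a, r) {
  psi <- function(t) (1 - t^2 * exp(-r)) * (a + sqrt(t))  # expected performance
  optimize(psi, c(0, 1), maximum = TRUE)$maximum
}
c(base = t_star(1, 1), abler = t_star(2, 1), tougher = t_star(1, 2))
# t* falls when a rises and rises when r rises
```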
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>For example, <code>\(t\)</code> could represent the proportion of time before the race that I spend training. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>I assume that <code>\(f\)</code> and <code>\(p\)</code> are twice continuously differentiable so that <code>\(\psi\)</code> is too. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Voting along party lines
https://bldavies.com/blog/voting-along-party-lines/
Thu, 26 Mar 2020 00:00:00 +0000https://bldavies.com/blog/voting-along-party-lines/<p>Later this year, New Zealanders will vote in a referendum on whether to legalise voluntary euthanasia under the conditions specified in the <a href="http://www.legislation.govt.nz/bill/member/2017/0269/latest/DLM7285905.html">End of Life Choice Bill</a> (hereafter “the Bill”).
Members of Parliament (MPs) read the Bill three times, each time holding a <a href="https://en.wikipedia.org/wiki/Conscience_vote">conscience vote</a> on whether to progress the Bill towards becoming legislation.
The table below presents the percentage and fraction of MPs who voted in favour of the Bill, separated by political party and reading.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<table>
<thead>
<tr>
<th align="left">Party</th>
<th align="center">First reading</th>
<th align="center">Second reading</th>
<th align="center">Third reading</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Act</td>
<td align="center">100% (1/1)</td>
<td align="center">100% (1/1)</td>
<td align="center">100% (1/1)</td>
</tr>
<tr>
<td align="left">Green</td>
<td align="center">100% (8/8)</td>
<td align="center">100% (8/8)</td>
<td align="center">100% (8/8)</td>
</tr>
<tr>
<td align="left">Independent</td>
<td align="center">100% (1/1)</td>
<td align="center">100% (1/1)</td>
<td align="center">100% (1/1)</td>
</tr>
<tr>
<td align="left">Labour</td>
<td align="center">80% (37/46)</td>
<td align="center">72% (33/46)</td>
<td align="center">72% (33/46)</td>
</tr>
<tr>
<td align="left">National</td>
<td align="center">36% (20/55)</td>
<td align="center">33% (18/55)</td>
<td align="center">29% (16/55)</td>
</tr>
<tr>
<td align="left">NZ First</td>
<td align="center">100% (9/9)</td>
<td align="center">100% (9/9)</td>
<td align="center">100% (9/9)</td>
</tr>
</tbody>
</table>
<p>Most MPs in the coalition government voted in favour, including all MPs from the Green Party and NZ First.
In the Bill’s final reading, 72% of Labour MPs followed party leader Jacinda Ardern’s vote in favour, while 71% of National MPs followed party leader Simon Bridges’ vote to oppose.
Overall, about a third of Labour and National MPs voted against their party lines.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup></p>
<p>New Zealand uses a <a href="https://en.wikipedia.org/wiki/Mixed-member_proportional_representation">mixed member proportional</a> electoral system:
voters submit votes for a political party and for a representative of their local constituency.
Consequently, some “list” MPs enter parliament because they are ranked highly within a party that received many votes rather than because they were the preferred candidate among their local constituents.
The table below shows that Labour and National list MPs were more likely to vote along party lines than non-list MPs in the Bill’s third reading.</p>
<table>
<thead>
<tr>
<th align="left">Party</th>
<th align="center">List MP adherence</th>
<th align="center">Non-list MP adherence</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Labour</td>
<td align="center">88% (15/17)</td>
<td align="center">62% (18/29)</td>
</tr>
<tr>
<td align="left">National</td>
<td align="center">80% (12/15)</td>
<td align="center">68% (27/40)</td>
</tr>
</tbody>
</table>
<p>The difference in list and non-list MPs’ adherence to party lines has at least two explanations.
First, non-list MPs have non-party reasons to be in parliament—namely, to serve their local constituents—and so may accept weaker ideological matches than list MPs when self-selecting into party affiliations.
This weaker matching would reduce the ideological polarisation and inertia among non-list MPs relative to list MPs.
Indeed, all of the MPs who changed their votes between the Bill’s first and third readings were non-list MPs.</p>
<p>Second, list MPs have stronger incentives to signal loyalty to their party because they cannot rely on support from local constituents to get elected.
If list MPs consistently oppose their leaders then they may be demoted within their parties and, consequently, become less likely to re-enter parliament at the next election.
Thus, to the extent that MPs want to maximise their chances of re-election, list MPs may be more willing than non-list MPs to ignore their conscience and vote along party lines.</p>
<p>It would be interesting to separate the ideological sorting and signalling motives that drive greater adherence among list MPs.
One strategy could be to track individual MPs across votes and governments, and analyse whether their propensity to vote along party lines is greater when they are list MPs than when they are non-list MPs.
However, I can’t find any up-to-date vote data online and don’t particularly want to create them by trawling through decades worth of <a href="https://www.parliament.nz/en/pb/hansard-debates/">Hansard</a> documents.<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>
Perhaps one of my readers is up for the challenge?</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>The data used in this post are available <a href="https://github.com/bldavies/eolc-bill/">here</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>This overlap in preferences among Labour and National MPs reflects the idealogical overlap between the two parties at the centre of the political spectrum. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>There was an <a href="https://web.archive.org/web/20190911021215/http://votes.wotfun.com/">online database</a> of conscience votes among New Zealand MPs, but the database was shut down in late 2019 and hadn’t been updated since 2012. <a href="#fnref:3" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Matching runners
https://bldavies.com/blog/matching-runners/
Sat, 21 Mar 2020 00:00:00 +0000https://bldavies.com/blog/matching-runners/<p>Running in pairs (and, more generally, in groups) can be more rewarding than running alone.
Running buddies can motivate each other, share the mental load of maintaining pace, and provide competition and accountability.</p>
<p>The main problem with running buddies is that they can be hard to find.
Not everyone is a runner, and runners vary in their abilities and training goals.
Moreover, these abilities and goals are mostly unobservable by other runners searching for a buddy.
This unobservable variation creates “matching frictions” that prevent runners from sorting into “optimal” pairs.</p>
<p>If prospective running buddies could observe each others’ abilities and training goals then they could form preferences over whom they want to be paired with.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
Runners could rank potential buddies from most to least preferred, and submit these rankings to a central match-maker (e.g., a team coach) whose task would be to partition the (even-sized, by assumption) set of <code>\(2n\)</code> runners into <code>\(n\)</code> pairs.
The socially optimal partition <code>\(\mathcal{P}^*\)</code> would minimise the sum
<code>$$S(\mathcal{P})=\sum_{\{i,j\}\in\mathcal{P}}(x_{ij}+x_{ji}),$$</code>
where <code>\(x_{ij}\)</code> is the rank that runner <code>\(i\)</code> assigns to potential buddy <code>\(j\)</code>.
Minimising <code>\(S(\mathcal{P})\)</code> would ensure that, on average, runners are paired with their most preferred buddies.</p>
<p>Let <code>\(X=(x_{ij})\)</code> be the matrix of preference rankings and let <code>\(Y=X+X^T\)</code>.
One way to find <code>\(\mathcal{P^*}\)</code> would be to choose <code>\(2n\)</code> entries of <code>\(Y\)</code> such that
(a) the sum of the chosen entries is minimised, and
(b) each row and column of <code>\(Y\)</code> contains exactly one chosen entry.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
This choice problem is equivalent to the (balanced) <a href="https://en.wikipedia.org/wiki/Assignment_problem">assignment problem</a>, and can be solved using the <a href="https://en.wikipedia.org/wiki/Hungarian_algorithm">Hungarian algorithm</a> or via linear programming.</p>
<p>The socially optimal partition <code>\(\mathcal{P^*}\)</code> of the set of runners into pairs may be “unstable:” there may exist two runners who would rather run with each other than with their assigned buddies.<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>
For example, suppose there is a set <code>\(\{a, b, c, d\}\)</code> of four runners with preference rankings captured by the matrix
<code>$$X=\begin{bmatrix} & 1 & 2 & 3 \\ 2 & & 1 & 3 \\ 1 & 2 & & 3 \\ 1 & 2 & 3 & \end{bmatrix}.$$</code>
The corresponding matrix <code>\(Y=X+X^T\)</code> of bidirectional sums is
<code>$$Y=\begin{bmatrix} & 3 & 3 & \underline{4} \\ 3 & & \underline{3} & 5 \\ 3 & \underline{3} & & 6 \\ \underline{4} & 5 & 6 & \end{bmatrix},$$</code>
where the underlined entries correspond to the socially optimal partition <code>\(\mathcal{P}^*=\{\{a,d\},\{b,c\}\}\)</code> with <code>\(S(\mathcal{P}^*)=14\)</code>.
This partition is unstable because runner <code>\(a\)</code> prefers <code>\(c\)</code> to their assigned buddy <code>\(d\)</code>, and runner <code>\(c\)</code> prefers <code>\(a\)</code> to their assigned buddy <code>\(b\)</code>.
Runners <code>\(a\)</code> and <code>\(c\)</code> would ignore the match-maker and become buddies, resulting in a socially inferior partition <code>\(\mathcal{P}_*=\{\{a,c\},\{b,d\}\}\)</code> with <code>\(S(\mathcal{P}_*)=16\)</code>.</p>
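<p>The four-runner example can be checked by brute force. The sketch below enumerates the three possible partitions and computes the assignment-style sum of chosen entries of <code>\(Y\)</code> (each pair counted in both directions, as in the <code>\(2n\)</code>-entry formulation above):</p>

```r
X <- matrix(c(0, 1, 2, 3,
              2, 0, 1, 3,
              1, 2, 0, 3,
              1, 2, 3, 0), nrow = 4, byrow = TRUE)  # rows/columns are a, b, c, d
Y <- X + t(X)
partitions <- list(
  list(c(1, 2), c(3, 4)),  # {{a,b}, {c,d}}
  list(c(1, 3), c(2, 4)),  # {{a,c}, {b,d}}
  list(c(1, 4), c(2, 3))   # {{a,d}, {b,c}}
)
sums <- sapply(partitions, function(P) sum(sapply(P, function(ij) 2 * Y[ij[1], ij[2]])))
sums  # 18 16 14: {{a,d}, {b,c}} is socially optimal
```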
<p>If the socially optimal partition of runners into pairs is unstable then the match-maker would need to prevent, or at least discourage, so-called “blocking pairs” from deviating from the optimum.
For example, the match-maker could restrict runners’ access to training areas (e.g., running tracks and trails) so that no blocking pairs have concurrent access.
However, such restrictions may be detrimental to runners’ training and camaraderie, and, consequently, reduce social welfare.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>For example, if I wanted to improve my pace then I might prefer to run with someone slightly faster than me so that I can try to match their speed without them racing ahead of me. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Equivalently, one could choose <code>\(n\)</code> entries in the upper-right triangle of <code>\(Y\)</code> that satisfy criteria (a) and (b). This works because <code>\(Y\)</code> is symmetric. If <code>\(y_{ij}\)</code> is chosen when minimising sums over <code>\(Y\)</code> then <code>\(y_{ji}\)</code> is chosen when minimising sums over <code>\(Y^T\)</code>. But <code>\(Y=Y^T\)</code>, so the sets of chosen entries when minimising over <code>\(Y\)</code> and <code>\(Y^T\)</code> must be equal. Thus, <code>\(y_{ij}\)</code> and <code>\(y_{ji}\)</code> must belong to both sets, and so the lower-left triangle can be ignored. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>On the other hand, I conjecture that every stable partition is socially optimal. <a href="#fnref:3" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Spotify Premium pricing
https://bldavies.com/blog/spotify-premium-pricing/
Tue, 17 Mar 2020 00:00:00 +0000https://bldavies.com/blog/spotify-premium-pricing/<p><a href="https://www.spotify.com/">Spotify</a> offers two music and podcast streaming services:
a free, online-only service, and
a paid “Premium” service with extra features like unlimited skips and offline playback.
Spotify earns some revenue from serving ads to free users, but most of its revenue (<a href="https://investors.spotify.com/financials/default.aspx">about 88% in 2019Q4</a>) comes from Premium subscriptions.
This revenue needs to cover Spotify’s fixed and variable costs, which include the costs of maintaining its servers and of paying royalties for streaming artists’ music.</p>
<p>Spotify’s profit function looks something like
<code>$$\pi(p)=n\theta(p)(p-v_1)+n(1-\theta(p))(a-v_2)-f,$$</code>
where <code>\(p\)</code> is the price of subscribing to Spotify Premium, <code>\(n\)</code> is the number of Spotify users, <code>\(\theta(p)\)</code> is the price-dependent proportion of these users who pay for Premium, <code>\(a\)</code> is the revenue from serving ads to each free user, <code>\(v_1\)</code> and <code>\(v_2\)</code> are Spotify’s variable costs per Premium and free user, and <code>\(f\)</code> is Spotify’s fixed costs.
I assume that <code>\(\theta(p)\)</code> decreases with <code>\(p\)</code> so that Spotify Premium is an <a href="https://en.wikipedia.org/wiki/Ordinary_good">ordinary good</a>.</p>
<p>The profit-maximising price <code>\(p^*\)</code> satisfies the first-order condition (FOC)
<code>$$\begin{align} 0 &= \pi'(p^*) \\ &= n\theta'(p^*)(p^*-v_1)+n\theta(p^*)-n\theta'(p^*)(a-v_2), \end{align}$$</code>
where <code>\(\pi'\)</code> and <code>\(\theta'\)</code> denote the derivatives of <code>\(\pi\)</code> and <code>\(\theta\)</code> with respect to <code>\(p\)</code>.
If <code>\(a=0\)</code> and <code>\(v_1=v_2\)</code> then the FOC can be rewritten as
<code>$$\frac{p^*\theta'(p^*)}{\theta(p^*)}=-1,$$</code>
which means that, at <code>\(p=p^*\)</code>, the demand for Spotify Premium is unit elastic with respect to its price.
If free users generate no ad revenue and have the same variable costs per user as Premium subscribers, then Spotify should raise its Premium price until the increased revenue per Premium subscriber exactly offsets the decrease in such subscribers.
In contrast, if <code>\(a>0\)</code> or if <code>\(v_1>v_2\)</code> then Spotify must raise <code>\(p^*\)</code> further to decrease <code>\(\theta(p^*)\)</code> and avoid the lost ad revenue or increased variable costs from converting too many free users.</p>
<p>Notice that <code>\(n\)</code> multiplies every term of <code>\(\pi'(p)\)</code> and therefore cancels out of the FOC, so <code>\(p^*\)</code> does not change when <code>\(n\)</code> changes.
In contrast, assuming that the second derivative of <code>\(\pi\)</code> with respect to <code>\(p\)</code> is negative at <code>\(p^*\)</code> (so that <code>\(p^*\)</code> is profit-<em>maximising</em> rather than profit-<em>minimising</em>), the <a href="https://en.wikipedia.org/wiki/Implicit_function_theorem">implicit function theorem</a> implies that
<code>$$\frac{\partial p^*}{\partial a}=\frac{\partial p^*}{\partial v_1}>0>\frac{\partial p^*}{\partial v_2}.$$</code>
In words, the profit-maximising price is increasing in <code>\(a\)</code> and <code>\(v_1\)</code>, and decreasing in <code>\(v_2\)</code>.
Intuitively, if Spotify collects more ad revenue from free users then it can afford to lose some Premium subscribers by raising the Premium price.
Likewise, the greater is the difference between <code>\(v_1\)</code> and <code>\(v_2\)</code>, the more expensive it is to serve Premium subscribers relative to free users and so the fewer Premium subscriptions Spotify would prefer to sell.</p>
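These comparative statics can be checked numerically. The sketch below assumes a hypothetical linear demand curve <code>\(\theta(p)=\max\{0,1-p/20\}\)</code> and illustrative parameter values (none of which come from Spotify&rsquo;s actual financials), and finds the profit-maximising price by grid search:

```python
def profit(p, n=1000, a=0.0, v1=0.5, v2=0.5, f=100.0):
    """pi(p) = n*theta(p)*(p - v1) + n*(1 - theta(p))*(a - v2) - f."""
    theta = max(0.0, 1.0 - p / 20.0)  # assumed demand schedule
    return n * theta * (p - v1) + n * (1 - theta) * (a - v2) - f

def argmax_price(**kwargs):
    """Grid search for the profit-maximising price on [0, 20]."""
    grid = [i / 100.0 for i in range(0, 2001)]
    return max(grid, key=lambda p: profit(p, **kwargs))

p_base = argmax_price()        # a = 0 and v1 = v2: unit-elastic optimum
p_ads  = argmax_price(a=0.2)   # free users now generate ad revenue
p_cost = argmax_price(v1=1.0)  # Premium users now costlier to serve
```

With these assumed parameters the baseline optimum is <code>\(p^*=10\)</code>, and both <code>\(a>0\)</code> and <code>\(v_1>v_2\)</code> push the optimal price up, as the implicit-function-theorem argument predicts.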
Stanford
https://bldavies.com/blog/stanford/
Fri, 13 Mar 2020 00:00:00 +0000https://bldavies.com/blog/stanford/<p>I am excited to announce that I will be moving to the United States later this year to pursue a PhD in economics at <a href="https://www.stanford.edu">Stanford University</a>.</p>
<p><a href="https://economics.stanford.edu/graduate/graduate-degree-programs">Stanford’s economics PhD program</a> ranks among the best in the world.
It begins with two years of advanced coursework on microeconomics, macroeconomics, econometrics, and field courses relevant to my academic interests.
This coursework will strengthen my technical and research skills, and prepare me for writing a PhD thesis that contributes substantively to the economic research literature.</p>
<p>One topic that interests me is how people overcome uncertainty when forming teams.
For example, the students in my cohort face uncertainty about who among the Stanford faculty will be the best supervisor(s) for their eventual theses.
Likewise, faculty members face uncertainty about which students will be the best candidates to supervise.
Participating in lectures and seminars will help students and faculty estimate their match qualities, leading to more informed and productive matches.</p>
<p>Another topic that interests me is how people share information in networks.
For example, my blog posts on <a href="https://bldavies.com/blog/information-gerrymandering/">information gerrymandering</a> and <a href="https://bldavies.com/blog/degroot-learning-social-networks/">DeGroot learning</a> use mathematical models to analyse how inter-personal connections influence peoples’ decisions and beliefs.
I am looking forward to learning more about these and related models, and their application to “real-world” social and economic systems.</p>
Uniform sums and Euler's number
https://bldavies.com/blog/uniform-sums-eulers-number/
Mon, 09 Mar 2020 00:00:00 +0000https://bldavies.com/blog/uniform-sums-eulers-number/<p>Suppose I sample values uniformly at random from the unit interval.
How many samples should I expect to take before the sum of my sampled values exceeds unity?<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p>Let <code>\(N\)</code> be the (random) number of samples taken when the sum first exceeds unity.
Then <code>\(N\)</code> has expected value <code>\(E[N]\)</code> equal to <a href="https://en.wikipedia.org/wiki/E_%28mathematical_constant%29">Euler’s number</a> <code>\(e\approx2.718282\)</code>.
This can be verified approximately via simulation:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">simulate</span> <span class="o"><-</span> <span class="nf">function</span><span class="p">(</span><span class="n">run</span><span class="p">)</span> <span class="p">{</span>
<span class="n">tot</span> <span class="o"><-</span> <span class="m">0</span>
<span class="n">N</span> <span class="o"><-</span> <span class="m">0</span>
<span class="nf">while </span><span class="p">(</span><span class="n">tot</span> <span class="o"><</span> <span class="m">1</span><span class="p">)</span> <span class="p">{</span>
<span class="n">tot</span> <span class="o"><-</span> <span class="n">tot</span> <span class="o">+</span> <span class="nf">runif</span><span class="p">(</span><span class="m">1</span><span class="p">)</span>
<span class="n">N</span> <span class="o"><-</span> <span class="n">N</span> <span class="o">+</span> <span class="m">1</span>
<span class="p">}</span>
<span class="n">N</span>
<span class="p">}</span>
<span class="nf">set.seed</span><span class="p">(</span><span class="m">0</span><span class="p">)</span>
<span class="nf">mean</span><span class="p">(</span><span class="nf">sapply</span><span class="p">(</span><span class="m">1</span><span class="o">:</span><span class="m">1e5</span><span class="p">,</span> <span class="n">simulate</span><span class="p">))</span>
</code></pre></div><pre><code>## [1] 2.7183
</code></pre><p>To see why <code>\(E[N]=e\)</code>, let <code>\((X_i)_{i=1}^\infty\)</code> be an infinite sequence of random variables with uniform distributions over the unit interval.
Then the probability that <code>\(N\)</code> exceeds any non-negative integer <code>\(n\)</code> is
<code>$$\Pr(N>n)=\Pr(X_1+X_2+\cdots+X_n<1).$$</code>
Consider the unit (hyper)cube in <code>\(\mathbb{R}^n\)</code>.
Its vertices comprise the origin, the standard basis vectors <code>\(e_1,e_2,\ldots,e_n\)</code>, and the sums of two or more of these basis vectors.
The convex hull of <code>\(\{0,e_1,e_2,\ldots,e_n\}\)</code> forms an <code>\(n\)</code>-simplex with volume <code>\(1/n!\)</code>.
The interior of this simplex is precisely the set
<code>$$\{(x_1,x_2,\ldots,x_n)\in[0,1]^n:x_1+x_2+\cdots+x_n<1\}.$$</code>
Since <code>\((X_1,X_2,\ldots,X_n)\)</code> is uniformly distributed on the unit cube, it follows that <code>\(\Pr(X_1+X_2+\cdots+X_n<1)=1/n!\)</code> and therefore <code>\(\Pr(N>n)=1/n!\)</code> from above.
Now
<code>$$\Pr(N=n)=\Pr(N>n-1)-\Pr(N>n)$$</code>
for each <code>\(n\ge1\)</code>.
Thus, since <code>\(\Pr(N>0)=1\)</code> (and, by convention, <code>\(0!=1\)</code>), we have
<code>$$\begin{align} E[N] &= \sum_{n=1}^\infty n\Pr(N=n) \\ &= \sum_{n=1}^\infty n\left(\Pr(N>n-1)-\Pr(N>n)\right) \\ &= \Pr(N>0)+\sum_{n=1}^\infty\Pr(N>n) \\ &= 1+\sum_{n=1}^\infty\frac{1}{n!} \\ &= \sum_{n=0}^\infty\frac{1}{n!} \\ &= e. \end{align}$$</code>
The final equality comes from evaluating the Maclaurin series expansion of <code>\(e^x\)</code> at <code>\(x=1\)</code>.</p>
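The R simulation above translates directly to other languages; here is an equivalent sketch in Python that also checks the closing step of the derivation, the series <code>\(\sum_{n\ge0}1/n!=e\)</code>:

```python
import math
import random

def sample_N(rng):
    """Number of U(0,1) draws needed for the running sum to exceed 1."""
    total, n = 0.0, 0
    while total < 1.0:
        total += rng.random()
        n += 1
    return n

rng = random.Random(0)
runs = 100_000
estimate = sum(sample_N(rng) for _ in range(runs)) / runs  # close to e

# Check the final step of the derivation: the series sum_{n>=0} 1/n!.
series = sum(1 / math.factorial(n) for n in range(20))
```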
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p><a href="https://www.3blue1brown.com">Grant Sanderson</a> mentions this problem in <a href="https://www.youtube.com/watch?v=6_yU9eJ0NxA&t=28m7s">this Numberphile video</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Triadic closure at the NBER
https://bldavies.com/blog/triadic-closure-nber/
Wed, 04 Mar 2020 00:00:00 +0000https://bldavies.com/blog/triadic-closure-nber/<p><a href="https://academic.oup.com/jeea/article-abstract/8/1/203/2295936">Fafchamps et al. (2010)</a> describe a model of team formation in which people learn about potential collaborators via existing collaborators.
These “referrals” provide information about potential collaborators’ match qualities, allowing people to <a href="https://en.wikipedia.org/wiki/Screening_%28economics%29">screen</a> each other and sort into more productive teams.
Fafchamps et al. argue, and demonstrate empirically, that this referral mechanism leads to more teams being formed among people who are closer in the collaboration network.</p>
<p>Fafchamps et al.’s referral model implies that triads in collaboration networks should tend to <a href="https://en.wikipedia.org/wiki/Triadic_closure">close</a> over time; that is, people should tend to collaborate with others with whom they share common collaborators.
One way to measure such closure is via the (global) <a href="https://en.wikipedia.org/wiki/Clustering_coefficient">clustering coefficient</a>, which measures the rate at which pairs of nodes with a common neighbour are also adjacent.
For example, in the <a href="https://bldavies.com/blog/nber-co-authorships/">NBER working paper co-authorship network</a>, about 15% of the pairs of authors who share common co-authors are co-authors themselves.
In contrast, we would expect this to happen 0.27% of the time in a <a href="https://bldavies.com/blog/degree-preserving-randomisation/">random network with the same degree distribution</a>, and 0.04% of the time in a random network with the same number of nodes and edges.
Thus, the NBER co-authorship network is much more clustered than would be expected if authors chose co-authors randomly.</p>
<p>Another way to measure triadic closure is by computing the rate at which pairs of nodes with common neighbours <em>become</em> adjacent.
This method makes sense whenever the network’s density grows over time.
Such growth occurs in the NBER co-authorship network through co-authorships of new working papers.
The network contains 32,034 pairs of eventual co-authors, 1,861 of whom share common co-authors at an earlier stage of the network’s evolution.
However, 340,235 of the 342,096 pairs of authors with common co-authors never become co-authors themselves.
Thus, only 0.54% of the unclosed triads in the NBER co-authorship network ever become closed.</p>
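Both measures are straightforward to compute from an edge list. The Python sketch below uses a tiny hypothetical two-period network (the edge lists are illustrative, not NBER data):

```python
from itertools import combinations

def clustering_coefficient(edges):
    """Global clustering: fraction of connected triples that are closed."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    closed = total = 0
    for ns in nbrs.values():
        for u, v in combinations(ns, 2):  # pairs sharing a common neighbour
            total += 1
            closed += v in nbrs[u]
    return closed / total if total else 0.0

def open_triads(edges):
    """Pairs of nodes with a common neighbour that are not (yet) adjacent."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    present = {frozenset(e) for e in edges}
    return {
        frozenset((u, v))
        for ns in nbrs.values()
        for u, v in combinations(sorted(ns), 2)
        if frozenset((u, v)) not in present
    }

# Toy network: early papers give a path A-B-C-D; a later paper adds A-C.
early = [("A", "B"), ("B", "C"), ("C", "D")]
late = early + [("A", "C")]

late_edges = {frozenset(e) for e in late}
opens = open_triads(early)  # the unclosed triads {A, C} and {B, D}
rate = sum(pair in late_edges for pair in opens) / len(opens)
```

In this toy network one of the two open triads closes, so the closure rate is 0.5 while the final clustering coefficient is 0.6; the NBER numbers above show how far apart the two measures can sit in a real network.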
<p>How can we reconcile the NBER co-authorship network’s high clustering coefficient with its low triad closure rate?<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
One explanation could be that referrals primarily attract collaborators on current projects rather than potential future projects.
Suppose I’m writing a paper with Alice, who suggests that Bob may have some valuable insights on our research, and that Bob and I might work well together.
It turns out that Bob does have valuable insights and that we do work well together, and Alice and I decide to make him a co-author on our paper.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
We publish our research as an NBER working paper, and Alice, Bob and I appear as a closed triad in the NBER co-authorship network (but never as an unclosed triad).</p>
<p>If intra-project closure is common then we would expect a high clustering coefficient and low triad closure rate in the NBER co-authorship network.
The open triads in the network would be the triads for which successful referrals did not occur during co-authorship, and the factors that prevented such referrals may persist after the paper is published.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Researchers in the NBER co-authorship network may collaborate in ways not captured by the network. For example, working papers published in the NBER series must have at least one NBER-affiliated author, so papers written exclusively by non-affiliates are not observed in my data. If co-author referrals primarily lead non-affiliates to collaborate, and if such collaboration does not culminate in NBER working paper publications, then we would expect to observe a low triad closure rate. However, we would also expect a low (perhaps lower than 0.15) clustering coefficient because the triads containing non-affiliates would remain mostly open. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p><a href="https://doi.org/10.2307/1926798">Barnett et al. (1988)</a> and <a href="https://doi.org/10.1257/jel.51.1.162">Hamermesh (2013)</a> suggest that co-authorship is increasingly used as compensation for colleagues’ research assistance. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Degree-preserving randomisation
https://bldavies.com/blog/degree-preserving-randomisation/
Mon, 17 Feb 2020 00:00:00 +0000https://bldavies.com/blog/degree-preserving-randomisation/<p><a href="https://bldavies.com/blog/centrality-rankings-noisy-edge-sets/">My previous post</a> used <a href="https://en.wikipedia.org/wiki/Degree-preserving_randomization">degree-preserving randomisation</a> (DPR) to control for network structure when estimating the effect of edge noise on nodes’ centrality rankings.
The idea was that nodes may be connected in ways that amplify or suppress the effects of noise, and randomising nodes’ connections helps to balance these effects by averaging over the network’s possible structures.</p>
<p>DPR can also be used to test whether a network’s structure is significantly different than would be expected for a random network with the same degree distribution.
For example, comparing a network’s clustering coefficient to the mean clustering coefficient among a sample of degree-preserving random networks reveals whether the original network is significantly more or less clustered than it would be, on average, if nodes’ connections were random.
In contrast to <a href="https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model">Erdös-Rényi</a> randomisation (ERR)—that is, generating a random network with the same number of nodes and edges—DPR separates variation in degree distributions from variation in other properties observed across sampled random networks.</p>
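One common way to implement DPR is by repeated &ldquo;double edge swaps,&rdquo; a rewiring move that preserves every node&rsquo;s degree. The sketch below is illustrative (the toy ring network and function names are my own; I am not claiming this is the exact algorithm used for the results that follow):

```python
import random

def degree_preserving_randomisation(edges, n_swaps, seed=0):
    """Rewire a simple undirected graph via double edge swaps: pick edges
    (a, b) and (c, d) and replace them with (a, d) and (c, b). Every node
    keeps its degree; only who-connects-to-whom changes."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    done = 0
    while done < n_swaps:
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # swap would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

def degrees(edges):
    out = {}
    for u, v in edges:
        out[u] = out.get(u, 0) + 1
        out[v] = out.get(v, 0) + 1
    return out

ring = [(i, (i + 1) % 10) for i in range(10)]  # toy network: a 10-cycle
rewired = degree_preserving_randomisation(ring, n_swaps=20)
```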
<p>Consider, as an example, the <a href="https://bldavies.com/blog/coauthorship-networks-motu/">Motu working paper co-authorship network</a>.
The table below presents the network’s median node degree, global <a href="https://en.wikipedia.org/wiki/Clustering_coefficient">clustering coefficient</a>, and <a href="https://en.wikipedia.org/wiki/Average_path_length">mean geodesic distance</a>.
The table also presents the sample means and standard deviations of these properties across 50 degree-preserving and Erdös-Rényi randomisations of the co-authorship network.</p>
<table>
<thead>
<tr>
<th align="left">Property</th>
<th align="center">Actual value</th>
<th align="center">DPR sample mean (sd)</th>
<th align="center">ERR sample mean (sd)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Median degree</td>
<td align="center">3.00</td>
<td align="center">3.00 (0.00)</td>
<td align="center">7.88 (0.33)</td>
</tr>
<tr>
<td align="left">Clustering coefficient</td>
<td align="center">0.52</td>
<td align="center">0.16 (0.01)</td>
<td align="center">0.04 (0.00)</td>
</tr>
<tr>
<td align="left">Mean distance</td>
<td align="center">2.72</td>
<td align="center">2.83 (0.03)</td>
<td align="center">2.74 (0.01)</td>
</tr>
</tbody>
</table>
<p>By definition, DPR preserves the degree distribution and, consequently, always delivers the same median degree as the co-authorship network.
In contrast, ERR removes the inequality in node degrees (arising, for example, from <a href="https://en.wikipedia.org/wiki/Preferential_attachment">preferential attachment</a>) and, consequently, delivers median degrees centred on the co-authorship network’s mean degree.</p>
<p>The co-authorship network is about 13 times more clustered than would be expected for an Erdös-Rényi random network with the same number of nodes and edges.
Controlling for the degree distribution drops this factor to just over three.
In contrast, the mean distance between nodes in the co-authorship network is closer to what we would expect in a comparable Erdös-Rényi random network than in a degree-preserving random network.</p>
Centrality rankings with noisy edge sets
https://bldavies.com/blog/centrality-rankings-noisy-edge-sets/
Fri, 14 Feb 2020 00:00:00 +0000https://bldavies.com/blog/centrality-rankings-noisy-edge-sets/<p>Suppose I want to rank the centralities of nodes in a network.
The network’s node set is correct, but its edge set is “noisy” in that it includes some false edges and excludes some true edges.
How sensitive to this noise are the rankings of nodes from most to least central?</p>
<p>One way to answer this question empirically is to perturb an observable “true” network by adding and deleting edges randomly.
This can be achieved by generating an <a href="https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model">Erdös-Rényi (ER) random network</a> with the same node set as the true network, and defining a “noisy” network with edge set equal to the symmetric difference of the true and ER networks’ edge sets.
This method “swaps” the states (from “present” to “not present”, or vice versa) of the true network’s edges at random.
Varying the edge creation probability in the ER network varies the amount of noise in the noisy network’s edge set.</p>
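The symmetric-difference construction takes only a few lines. A Python sketch, with illustrative node labels and an assumed swap probability:

```python
import random
from itertools import combinations

def add_edge_noise(nodes, edges, p, seed=0):
    """Return the symmetric difference of the true edge set with an
    Erdos-Renyi graph on the same nodes: each possible edge flips state
    ("present" <-> "absent") independently with probability p."""
    rng = random.Random(seed)
    noisy = {frozenset(e) for e in edges}
    for pair in combinations(nodes, 2):
        if rng.random() < p:
            noisy ^= {frozenset(pair)}  # flip this edge's state
    return noisy

nodes = list(range(6))
true_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
noisy = add_edge_noise(nodes, true_edges, p=0.2)
```

At <code>p = 0</code> the noisy network equals the true one; at <code>p = 1</code> it is the true network&rsquo;s complement.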
<p>I demonstrate this “random edge swapping” method by applying it to the <a href="https://bldavies.com/blog/coauthorship-networks-motu/">Motu working paper co-authorship network</a>.
First, I generate 30 ER networks and 30 corresponding noisy networks for a range of edge swap probabilities.
I then compute nodes’ betweenness, degree and PageRank centralities in the co-authorship networks with and without noise, and calculate the <a href="https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient">Spearman rank correlation</a> between the true and noisy centralities using each of the three measures.
Finally, I compute the sample means and 95% confidence intervals for the measure-specific rank correlations across the simulation runs associated with each edge swap probability.
I present these means and confidence intervals in the left panel of the plot below.
The right panel presents similar information, but with a <a href="https://bldavies.com/blog/degree-preserving-randomisation/">degree-preserving randomisation</a> of the co-authorship network within each simulation run before introducing noise.
This randomisation allows me to control for the effect of the co-authorship network’s structure on my rank correlation estimates.</p>
<p><img src="figures/correlations-1.svg" alt=""></p>
<p>Increasing the edge swap probability decreases the consistency between the true and noisy centrality rankings for each of the three centrality measures I analyse.
Intuitively, the more noise there is in the edge set, the less similar are the true and noisy co-authorship networks, and so the less correlated are the centrality rankings of the nodes in these networks.</p>
<p>Degree centrality rankings are the least sensitive to edge noise.
Adding or deleting edges moves the incident nodes up or down the degree rank order, but leaves the relative ranks among non-incident nodes intact.
Degree-preserving randomisation, by definition, does not affect nodes’ degree centrality rankings and so does not change the sensitivity of those rankings to noise.</p>
<p>PageRank centrality rankings are more sensitive to edge noise.
Since nodes’ PageRank centralities depend on the PageRank centralities of their neighbours, the effect of adding or deleting edges spills over to some non-incident nodes and, consequently, disrupts the PageRank rank order more than the degree rank order.
Controlling for network structure increases the influence that degree has on PageRank centrality and, consequently, decreases the sensitivity of PageRank centrality rankings to errant edges.</p>
<p>Betweenness centrality rankings are the most sensitive to edge noise.
Adding or deleting edges can create or destroy short(est) paths between nodes, leading to radical changes in betweenness centrality for nodes on these paths.
Controlling for network structure suppresses these changes by reducing the initial inequality in betweenness centralities.
About 71% of nodes in the true co-authorship network have betweenness centralities equal to zero, whereas 20% of nodes in the randomised networks have betweenness centralities equal to zero.
Consequently, nodes in the randomised networks typically have “less betweenness to gain or lose” than nodes in the true network, diminishing the effect of errant edges on betweenness centrality rankings.</p>
motuwp is now an R package
https://bldavies.com/blog/motuwp-package/
Sat, 08 Feb 2020 00:00:00 +0000https://bldavies.com/blog/motuwp-package/<p>My current project at Motu involves analysing co-authorship networks.
It is helpful for me to have a small example network that I can use to, for example, <a href="https://bldavies.com/blog/sampling-motu-coauthorship-network/">compare sampling techniques</a>.
The <a href="https://bldavies.com/blog/coauthorship-networks-motu/">Motu working paper co-authorship network</a> is my go-to.
Since I work mostly in R, I have converted the <a href="https://github.com/bldavies/motuwp">repository</a> containing the underlying authorship data to an R package.
This package can be installed from GitHub via <a href="https://github.com/r-lib/remotes">remotes</a>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">remotes</span><span class="p">)</span>
<span class="nf">install_github</span><span class="p">(</span><span class="s">'bldavies/motuwp'</span><span class="p">)</span>
</code></pre></div><p>motuwp provides two data frames: <code>papers</code>, containing working paper attributes, and <code>authors</code>, containing author-paper pairs.
These pairs can be used to construct a co-authorship network as follows:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">igraph</span><span class="p">)</span>
<span class="nf">library</span><span class="p">(</span><span class="n">motuwp</span><span class="p">)</span>
<span class="c1"># Method 1: Project bipartite author-paper network onto author set</span>
<span class="n">bip</span> <span class="o"><-</span> <span class="nf">graph_from_data_frame</span><span class="p">(</span><span class="n">authors</span><span class="p">,</span> <span class="n">directed</span> <span class="o">=</span> <span class="bp">F</span><span class="p">)</span>
<span class="nf">V</span><span class="p">(</span><span class="n">bip</span><span class="p">)</span><span class="o">$</span><span class="n">type</span> <span class="o"><-</span> <span class="nf">V</span><span class="p">(</span><span class="n">bip</span><span class="p">)</span><span class="o">$</span><span class="n">name</span> <span class="o">%in%</span> <span class="n">authors</span><span class="o">$</span><span class="n">author</span>
<span class="n">net</span> <span class="o"><-</span> <span class="nf">bipartite_projection</span><span class="p">(</span><span class="n">bip</span><span class="p">,</span> <span class="n">which</span> <span class="o">=</span> <span class="s">'true'</span><span class="p">,</span> <span class="n">multiplicity</span> <span class="o">=</span> <span class="bp">F</span><span class="p">)</span>
<span class="c1"># Method 2: use convenience function that returns same network</span>
<span class="n">net</span> <span class="o"><-</span> <span class="nf">coauthorship_network</span><span class="p">()</span>
</code></pre></div><p>The co-authorship network <code>net</code> contains 185 nodes and 729 edges.
These values are larger than the corresponding values of 82 and 218 reported in <a href="https://bldavies.com/blog/sampling-motu-coauthorship-network/">my mid-2019 blog post</a> on the network.
The increases are due to me adding (i) the remaining working papers from 2019, (ii) some papers with missing landing pages, and (iii) authors with no hyperlinked profile page on Motu’s website.</p>
NBER (co-)authorships
https://bldavies.com/blog/nber-co-authorships/
Fri, 07 Feb 2020 00:00:00 +0000https://bldavies.com/blog/nber-co-authorships/<p>I recently updated the R package <a href="https://github.com/bldavies/nberwp">nberwp</a> to include data on NBER working paper authorships.
These data describe a bipartite author-paper network containing 13,571 authors and 26,586 papers.
On average, each author has 4.35 papers and each paper has 2.22 authors.</p>
<p>The co-authorship network among NBER authors—that is, the <a href="https://en.wikipedia.org/wiki/Bipartite_network_projection">bipartite projection</a> of the author-paper network onto the set of authors—contains 0.03% of the possible edges among the 13,571 authors in the network.
On average, each author has 4.72 unique co-authors across the working paper series.
About 95% of authors belong to a single connected component of the co-authorship network, while 139 authors have no co-authors.</p>
<p>One challenge that arises when constructing co-authorship networks is <a href="https://en.wikipedia.org/wiki/Author_name_disambiguation">disambiguating authors’ names</a>.
Slight misspellings may split a single author into many nodes, while many authors with the same name may be merged into a single node.
These false splits and merges inhibit one’s ability to draw robust inferences about collaborative behaviour from the co-authorship network’s structure.</p>
<p>It is easiest to disambiguate author names when they can be cross-referenced against other data.
The <a href="https://www.nber.org/RePEc/nbr/nberwo/">NBER RePEc index</a>, from which I extract the authorship data, links some authors to their <a href="http://repec.org/">RePEc</a> author IDs.
These IDs allow me to merge some authors who publish under varying names.
I also merge authors with (i) sufficiently similar names and (ii) overlapping neighbourhoods in the co-authorship network.
Criterion (i) assumes that authors’ names tend to vary from their true values by a few characters only, while criterion (ii) assumes that authors tend to write multiple papers with the same set of co-authors.
Combined, these criteria form a computationally feasible heuristic for identifying and resolving false splits.</p>
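The two-criterion heuristic can be sketched as follows. The string metric (difflib&rsquo;s <code>ratio</code>), the 0.8 threshold, and the toy author names are my assumptions for illustration, not necessarily what was used to build the nberwp data:

```python
from difflib import SequenceMatcher

def similar_names(a, b, threshold=0.8):
    """Criterion (i): names within a few characters of each other."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def should_merge(a, b, neighbours, threshold=0.8):
    """Merge two author nodes if (i) their names are similar and
    (ii) they share at least one co-author."""
    overlap = neighbours.get(a, set()) & neighbours.get(b, set())
    return similar_names(a, b, threshold) and bool(overlap)

neighbours = {
    "J Smith": {"A Jones", "B Lee"},
    "J Smyth": {"A Jones"},      # likely the same person, misspelled
    "J Smith Jr": {"C Wu"},      # similar name but disjoint co-authors
}
```

Here &ldquo;J Smith&rdquo; and &ldquo;J Smyth&rdquo; are merged (similar names, shared co-author) while &ldquo;J Smith Jr&rdquo; is not, despite the similar name, because criterion (ii) fails.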
<p>In contrast, I do not attempt to identify false merges.
One method could be to look for authors who bridge otherwise distant parts of the co-authorship network.
This method assumes that authors tend to sort into clusters (e.g., by research interest) and that links between clusters are uncommon.
However, this assumption defies the empirical evidence that the co-authorship network among economists has a <a href="https://en.wikipedia.org/wiki/Small-world_network">small-world</a> structure (<a href="https://doi.org/10.1086/500990">Goyal et al., 2006</a>).</p>
DeGroot learning in social networks
https://bldavies.com/blog/degroot-learning-social-networks/
Mon, 27 Jan 2020 00:00:00 +0000https://bldavies.com/blog/degroot-learning-social-networks/<p>The first book on my reading list for 2020 was <a href="https://web.stanford.edu/~jacksonm/">Matthew Jackson</a>’s <em>The Human Network</em>.
Its seventh chapter discusses <a href="https://en.wikipedia.org/wiki/DeGroot_learning">DeGroot learning</a> as a process for building consensus among members of a social network.</p>
<p>Consider a (strongly) connected social network among <code>\(n\)</code> people.
These people have private information that they use to form independent initial beliefs <code>\(b_1^{(0)},\ldots,b_n^{(0)}\)</code> about the value of some parameter <code>\(\theta\)</code>.
Recognising that their information sets may be incomplete, everyone updates their beliefs in discrete time steps by iteratively adopting the mean belief among their friends.
This process spreads the information available to each individual throughout the network, allowing people’s beliefs to converge to a consensus estimate <code>\(\hat\theta\)</code> of <code>\(\theta\)</code>.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p>The figure below presents an example of this setup.
It shows the social network among eight people after zero, one, two, and three time steps.
Nodes represent people, and are coloured according to the deviation of people’s beliefs above (orange) or below (purple) <code>\(\theta\)</code>'s true value (white).
Edges represent mutual friendships.
Over time, the information embedded in people’s initial beliefs diffuses throughout the network and the variation in beliefs around <code>\(\hat\theta\)</code> collapses to zero.</p>
<p><img src="figures/example-1.svg" alt=""></p>
<p>People with more friends have more influence on the consensus estimate because they have more avenues through which to spread information.
One can formalise this claim as follows.
Let <code>\(b^{(t)}=(b_1^{(t)},\ldots,b_n^{(t)})\)</code> be the <code>\(n\times 1\)</code> vector of time <code>\(t\)</code> beliefs.
This vector evolves according to
<code>$$b^{(t+1)}=Wb^{(t)},$$</code>
where <code>\(W=(W_{ij})\)</code> is a row-stochastic <code>\(n\times n\)</code> matrix with entries <code>\(W_{ij}\)</code> equal to the (time-invariant) weight that person <code>\(i\)</code> assigns to the beliefs of person <code>\(j\)</code> at each time step.
Notice that <code>\(b^{(t)}=W^tb^{(0)}\)</code> and so the <code>\(n\times1\)</code> vector <code>\(b^{(\infty)}=(\hat\theta,\ldots,\hat\theta)\)</code> of consensus estimates is given by
<code>$$b^{(\infty)}=\lim_{t\to\infty}W^tb^{(0)}.$$</code></p>
<p>In the context of DeGroot learning in social networks, we have
<code>$$W_{ij}=\frac{A_{ij}+I_{ij}}{d_i+1},$$</code>
where <code>\(A=(A_{ij})\)</code> is the adjacency matrix for the social network,
<code>\(d_i=\sum_{j=1}^nA_{ij}\)</code> is person <code>\(i\)</code>'s degree in that network,
and <code>\(I=(I_{ij})\)</code> is the <code>\(n\times n\)</code> identity matrix.
Adding one in the numerator (if <code>\(i=j\)</code>) and denominator reflects person <code>\(i\)</code> including their own beliefs when computing the mean among their friends.</p>
<p>The matrix <code>\(W\)</code> describes a <a href="https://en.wikipedia.org/wiki/Markov_chain">Markov chain</a> <code>\(\mathcal{M}\)</code> on the set of <code>\(n\)</code> people.
Assuming that the social network is (strongly) connected implies that <code>\(\mathcal{M}\)</code> is irreducible, and the positive diagonal of <code>\(W\)</code> (everyone places some weight on their own beliefs) implies that it is aperiodic.
It follows from the <a href="https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem">Perron-Frobenius theorem</a> that
<code>$$\lim_{t\to\infty}W^t=1_n\pi,$$</code>
where <code>\(1_n\)</code> is the <code>\(n\times1\)</code> vector of ones and <code>\(\pi\)</code> is a <code>\(1\times n\)</code> row vector corresponding to the unique stationary distribution of <code>\(\mathcal{M}\)</code>; that is, <code>\(\pi\)</code> uniquely solves
<code>$$\pi W=\pi$$</code>
subject to the constraints that <code>\(\pi_j\ge0\)</code> for each <code>\(j\)</code> and <code>\(\sum_{j=1}^n\pi_j=1\)</code>.</p>
<p>Now, let <code>\(v\)</code> be the <code>\(1\times n\)</code> row vector with entries <code>\(v_j=(d_j+1)/\sum_{k=1}^n(d_k+1)\)</code>.
Then <code>\(v_j\ge0\)</code> for each <code>\(j\)</code> and <code>\(\sum_{j=1}^nv_j=1\)</code>.
Moreover, since <code>\(A\)</code> is symmetric (and so <code>\(d_j=\sum_{i=1}^nA_{ij}\)</code>),
<code>$$\begin{align} (v W)_j &=\sum_{i=1}^nv_iW_{ij}\\ &=\sum_{i=1}^n\frac{d_i+1}{\sum_{k=1}^n(d_k+1)}\frac{A_{ij}+I_{ij}}{{d_i+1}}\\ &=\frac{d_j+1}{\sum_{k=1}^n(d_k+1)}\\ &=v_j \end{align}$$</code>
for each <code>\(j\)</code> so that <code>\(vW=v\)</code> and therefore <code>\(\pi=v\)</code> by uniqueness.
Thus, the consensus estimate is given by
<code>$$\hat\theta=\frac{\sum_{k=1}^n(d_k+1)b_k^{(0)}}{\sum_{k=1}^n(d_k+1)}.$$</code>
Finally, the influence that person <code>\(i\)</code> has on <code>\(\hat\theta\)</code> is captured by the partial derivative
<code>$$\frac{\partial\hat\theta}{\partial b_i^{(0)}}=\frac{d_i+1}{\sum_{k=1}^n(d_k+1)},$$</code>
which is an increasing linear function of person <code>\(i\)</code>'s degree <code>\(d_i\)</code> in the social network.</p>
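The closed form can be checked numerically. Here is a Python sketch (the network and initial estimates are made up for illustration) that iterates b(t) = W^t b(0) on a toy network and confirms that the consensus equals the degree-weighted average of initial estimates:

```python
import numpy as np

# Hypothetical symmetric social network on four people.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])
n = A.shape[0]
d = A.sum(axis=1)                       # degrees d_i
W = (A + np.eye(n)) / (d[:, None] + 1)  # row-stochastic updating matrix

b0 = np.array([0.1, 0.4, 0.7, 1.0])     # arbitrary initial estimates

# Iterate b^(t) = W b^(t-1) until (approximate) consensus.
b = b0.copy()
for _ in range(200):
    b = W @ b

# Degree-weighted average predicted by the closed form.
theta_hat = np.sum((d + 1) * b0) / np.sum(d + 1)
```

All entries of `b` converge to `theta_hat`, matching the stationary-distribution argument above.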
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Convergence is guaranteed if the social network is strongly connected <a href="https://doi.org/10.1257/mic.2.1.112">(Golub and Jackson, 2010)</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
White elephant gift exchanges
https://bldavies.com/blog/white-elephant-gift-exchanges/
Wed, 11 Dec 2019 00:00:00 +0000https://bldavies.com/blog/white-elephant-gift-exchanges/<p>Motu’s staff Christmas party is this Friday.
We’re planning a <a href="https://en.wikipedia.org/wiki/White_elephant_gift_exchange">white elephant gift exchange</a>: everyone contributes a wrapped gift to a common pool and sequentially chooses to either (i) unwrap a gift or (ii) steal a previously unwrapped gift.
“Victims” of theft make the same choice, but previously stolen gifts cannot be re-stolen until a new gift is unwrapped.
The exchange ends when the last gift is unwrapped.</p>
<p>Suppose I want to maximise the subjective value of the gift in my possession when the exchange ends.
I must overcome two strategic challenges:
I don’t know the subjective values of wrapped gifts, and
I don’t know other players’ subjective values of wrapped <em>or unwrapped</em> gifts.
Therefore, any strategy I adopt must account for uncertainty both in wrapped gifts’ subjective values and in the propensity of other players to steal unwrapped gifts I covet.</p>
<p>One strategy could be to always steal the unwrapped gift with the highest subjective value.
This strategy is risky because my subjective valuations might correlate with those of other players, making it more likely I will become a victim of theft.
I could hedge this risk by instead always stealing the unwrapped gift with the <em>second</em> highest subjective value (unless I’m the last player, in which case I would be better off stealing the most subjectively valuable gift because it can’t be re-stolen).
Alternatively, I could play as a pacifist and never steal (unless I’m the last player).</p>
<p>I compare these three strategies—greediness, hedged greediness, and pacifism—via simulation.
I assume gifts’ subjective values are determined as the mean of two <a href="https://en.wikipedia.org/wiki/Continuous_uniform_distribution#Standard_uniform">standard uniform</a> random variables: one describing an underlying value common to all players, and one describing an idiosyncratic component unique to each player.
I simulate 1000 games among 30 players, randomising the strategies adopted by each player in each game.</p>
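The value-generating step can be sketched as follows (in Python rather than the R used for the actual simulations, and with made-up parameters):

```python
import random

# Illustrative sketch: each gift's subjective value is the mean of a common
# and an idiosyncratic standard uniform draw, so players' valuations of the
# same gift are positively correlated through its common component.
def gift_values(n_players, n_gifts, seed=42):
    rng = random.Random(seed)
    common = [rng.random() for _ in range(n_gifts)]  # value shared by all players
    return [[(common[g] + rng.random()) / 2 for g in range(n_gifts)]
            for _ in range(n_players)]               # values[player][gift]

values = gift_values(n_players=30, n_gifts=30)
# Every subjective value lies in [0, 1].
```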
<p>For each simulated game, I compute the subjective value of the gift in each player’s possession when the exchange ends.
I also compute the allocation that maximises aggregate (i.e., the sum of) subjective values.
I refer to the subjective values in this allocation as “efficiency baselines,” and use them to compare strategies’ tendencies to deliver socially optimal allocations.
I summarise my simulation results in the plot below.</p>
<p><img src="figures/plot-1.svg" alt=""></p>
<p>Across all strategies, players whose turns arrive later in the game tend to be better off.
Such players have more choices of gifts to steal and fewer opportunities to become victims of theft.
Greedier players tend to end up with more subjectively valuable gifts, while pacifists—who never use victimisation as an opportunity to “trade up”—typically possess the least subjectively valuable gifts when the exchange ends.
Only late and/or greedy players tend to do better than under the socially optimal allocation.</p>
<p>Choosing not to steal is risky because it may result in unwrapping a low-value gift that no other players want to steal.
The first player, who cannot steal, is particularly exposed to this risk.
The game could be made fairer by allowing the first player (and subsequent victims) to unilaterally swap gifts when everyone else has had their turn.
This adjustment shifts the disadvantage to the second player, who, in the game’s pre-swap phase, has only two choices: steal from the first player or unwrap a new gift.
Giving more players a second turn could improve the final gift allocation by giving early players a larger choice set.</p>
<p>The table below shows how the efficiency and equity of the final gift allocation varies with the number of early players given a second turn.
I measure efficiency by the ratio of aggregate subjective values to aggregate efficiency baselines.
I define equity as one minus the <a href="https://en.wikipedia.org/wiki/Gini_coefficient">Gini coefficient</a> for the distribution of subjective values.
The table reports 95% confidence intervals across 1000 simulated games.</p>
<table>
<thead>
<tr>
<th align="center">Players given second turn</th>
<th align="center">Efficiency (%)</th>
<th align="center">Equity (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">0</td>
<td align="center">83.9 ± 0.2</td>
<td align="center">80.3 ± 0.2</td>
</tr>
<tr>
<td align="center">1</td>
<td align="center">88.8 ± 0.2</td>
<td align="center">83.0 ± 0.2</td>
</tr>
<tr>
<td align="center">2</td>
<td align="center">88.7 ± 0.2</td>
<td align="center">82.8 ± 0.2</td>
</tr>
<tr>
<td align="center">3</td>
<td align="center">88.6 ± 0.2</td>
<td align="center">82.7 ± 0.2</td>
</tr>
<tr>
<td align="center">4</td>
<td align="center">88.6 ± 0.2</td>
<td align="center">82.6 ± 0.2</td>
</tr>
<tr>
<td align="center">5</td>
<td align="center">88.3 ± 0.2</td>
<td align="center">82.4 ± 0.2</td>
</tr>
</tbody>
</table>
<p>Giving the first player a second turn makes the final allocation more efficient and more equitable.
That player gets a chance to improve upon their initial endowment, and subsequent victims get a chance to reconsider their choices with more information about the distribution of gifts’ subjective values.
However, on average, giving further players a second turn appears to push efficiency and equity back down.</p>
Birds, voting, and Russian interference
https://bldavies.com/blog/birds-voting-russian-interference/
Sun, 17 Nov 2019 00:00:00 +0000https://bldavies.com/blog/birds-voting-russian-interference/<p>Since 2005, <a href="https://www.forestandbird.org.nz">Forest and Bird</a> has run annual elections for New Zealand’s <a href="https://www.birdoftheyear.org.nz">Bird of the Year</a>.
This week Radio New Zealand <a href="https://www.rnz.co.nz/news/national/402986/bird-of-the-year-2019-hoiho-takes-the-winning-title">announced</a> the <a href="https://en.wikipedia.org/wiki/Yellow-eyed_penguin">yellow-eyed penguin</a> as 2019’s winner.
A follow-up <a href="https://twitter.com/Forest_and_Bird/status/1193720097283567616">tweet</a> by Forest and Bird <a href="https://www.rnz.co.nz/news/national/403085/bird-of-the-year-russian-interest-in-contest-piques-suspicions-online">raised suspicions</a> of possible Russian interference in the vote’s outcome.</p>
<p>Forest and Bird’s tweet includes a world map with countries coloured by voter turnout.
The bar chart below presents the same information in a less exciting format.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p><img src="figures/countries-1.svg" alt=""></p>
<p>Russian votes account for 193 of the 15,044 votes with known country of origin.
New Zealand contributed 12,651 such votes.
Fully 28,416 votes had unknown origin and were excluded from the set of votes used to determine the winning bird.</p>
<p>This year’s election used an <a href="https://en.wikipedia.org/wiki/Instant-runoff_voting">instant-runoff</a> system.
Voters ranked up to five of their favourite birds in order of preference.
Beginning with voters’ first preferences, the bird with the fewest votes was repeatedly eliminated and its votes reallocated to its supporters’ next-favourite surviving birds.
This process continued until one bird remained.</p>
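The elimination procedure can be sketched in a few lines of Python, using invented ballots rather than the real voting data:

```python
from collections import Counter

# Minimal instant-runoff sketch. Each ballot is a preference-ordered list
# of candidates (here, up to five birds).
def instant_runoff(ballots):
    active = {c for b in ballots for c in b}
    while len(active) > 1:
        # Count each ballot toward its highest-ranked surviving candidate.
        tallies = Counter()
        for b in ballots:
            for c in b:
                if c in active:
                    tallies[c] += 1
                    break
        # Eliminate the candidate with the fewest active votes.
        loser = min(active, key=lambda c: tallies[c])
        active.discard(loser)
    return active.pop()

ballots = [
    ["hoiho", "kakapo"],
    ["kakapo"],
    ["kea", "hoiho"],
    ["hoiho"],
    ["kakapo", "kea"],
]
winner = instant_runoff(ballots)  # kea is eliminated first; hoiho then wins 3-2
```

(A real implementation would also need a tie-breaking rule for eliminations; the example ballots avoid ties.)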
<p>The table below reports the last five birds eliminated by the instant-runoff process among the votes cast from anywhere, from known countries, from New Zealand, from Russia, and from known countries excluding Russia.
The bracketed percentages represent the share of voters from each country who preferred the top two candidates in the final round.
For example, 61.6% of New Zealanders with preferences over the yellow-eyed penguin and the kākāpō preferred the former.</p>
<table>
<thead>
<tr>
<th align="center">Place</th>
<th align="center">All countries</th>
<th align="center">Known countries</th>
<th align="center">New Zealand</th>
<th align="center">Russia</th>
<th align="center">Known countries ex. Russia</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">1</td>
<td align="center">Yellow-eyed penguin (52.4%)</td>
<td align="center">Yellow-eyed penguin (58.7%)</td>
<td align="center">Yellow-eyed penguin (61.6%)</td>
<td align="center">Kākāpō (52.0%)</td>
<td align="center">Yellow-eyed penguin (59.0%)</td>
</tr>
<tr>
<td align="center">2</td>
<td align="center">Kākāpō (47.6%)</td>
<td align="center">Kākāpō (41.3%)</td>
<td align="center">Kākāpō (38.4%)</td>
<td align="center">Black Robin (48.0%)</td>
<td align="center">Kākāpō (41.0%)</td>
</tr>
<tr>
<td align="center">3</td>
<td align="center">Black Robin</td>
<td align="center">Banded Dotterel</td>
<td align="center">Banded Dotterel</td>
<td align="center">Barn Owl</td>
<td align="center">Banded Dotterel</td>
</tr>
<tr>
<td align="center">4</td>
<td align="center">Banded Dotterel</td>
<td align="center">Black Robin</td>
<td align="center">Black Robin</td>
<td align="center">Antipodean Albatross</td>
<td align="center">Black Robin</td>
</tr>
<tr>
<td align="center">5</td>
<td align="center">Fantail</td>
<td align="center">Kākā</td>
<td align="center">Fantail</td>
<td align="center">Southern Brown Kiwi</td>
<td align="center">Kākā</td>
</tr>
</tbody>
</table>
<p>Excluding votes from unknown countries did not affect which bird won.
New Zealand voters got the outcome for which they voted, whereas Russian voters would have crowned the kākāpō.
Removing Russian votes wouldn’t have changed the election outcome—to the extent that Russians did interfere with the vote, their interference was not successful.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>The data used in this post are copyright Forest and Bird, and are released under a <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a> license. They are available <a href="https://www.dragonfly.co.nz/news/2019-11-12-boty.html">here</a>. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
How central is Grand Central Terminal?
https://bldavies.com/blog/how-central-grand-central-terminal/
Thu, 14 Nov 2019 00:00:00 +0000https://bldavies.com/blog/how-central-grand-central-terminal/<p>I spent most of October travelling in the United States.
I visited a range of large cities with correspondingly large subway systems.
New York City’s is the most extensive, containing <a href="https://www.citymetric.com/transport/what-largest-metro-system-world-1361">more stops than any other subway system in the world</a>.
Its <a href="http://www.grandcentralterminal.com">crown jewel</a>, Grand Central Terminal, provides access to many cultural and commercial attractions in Midtown Manhattan.</p>
<p>But just how central is Grand Central?</p>
<p>To help me answer this question, I created an R package <a href="https://github.com/bldavies/nyctrains">nyctrains</a> that provides data on the NYC subway network.
These data include scheduled travel times between subway stops.
I use these times to construct a travel-time-weighted directed network in which stops are adjacent if they occur consecutively along any route.
I exclude stops along the Staten Island Railway, which is disconnected from the rest of the system.
The plot below maps the resulting network, with nodes positioned by latitude/longitude and with edges coloured by route.
(Some routes overlap.)</p>
<p><img src="figures/map-1.svg" alt=""></p>
<p>Estimating Grand Central’s centrality requires choosing a measure.
One candidate is <a href="https://en.wikipedia.org/wiki/Betweenness_centrality">betweenness centrality</a>.
Stops are more betweenness-central if trains are more likely to pass through them when taking the fastest route between other stops.</p>
<p>Another candidate measure is <a href="https://en.wikipedia.org/wiki/Closeness_centrality">closeness centrality</a>.
Stops are more (out-)closeness-central if they have shorter mean fastest travel times to all other stops.
In the NYC subway network, some of these times are infinite because the network is not <a href="https://en.wikipedia.org/wiki/Strongly_connected_component">strongly connected</a>.
For example, it is not possible to get from Grand Central to <a href="https://subwaynut.com/ind/aqueduct_racetracka/index.php">Aqueduct Racetrack</a> without exiting the subway system.</p>
<p>Closeness centrality measures the extent to which stops provide fast access to other stops.
Another way to measure such access is to count the number of stops that can be reached within a specified time.
For example, the chart below shows the number of stops that can be reached from Grand Central and Broadway Junction within an hour.</p>
<p><img src="figures/reach-1.svg" alt=""></p>
<p>The number of stops reachable from Grand Central dominates the corresponding number from Broadway Junction for all but the smallest travel time allowances.
One way to operationalise this fact is to observe that the area below the red curve exceeds the area below the blue curve.
In general, the area below the cumulative reach curve is larger for stops that provide access to more stops in less time.
I compute this area for each stop as a measure of what I call “reach” centrality.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
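The reach computation can be sketched on a made-up travel-time network (the real analysis uses the nyctrains data): fastest travel times come from Dijkstra's algorithm, and reach counts the stops within a time budget.

```python
import heapq

# Toy directed, travel-time-weighted network (minutes); not the real subway data.
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 2)],
    "D": [],
}

def travel_times(graph, source):
    """Fastest travel time from source to every reachable stop (Dijkstra)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            if t + w < dist.get(v, float("inf")):
                dist[v] = t + w
                heapq.heappush(heap, (t + w, v))
    return dist

def reach(graph, source, budget):
    """Number of other stops reachable from source within `budget` minutes."""
    return sum(1 for v, t in travel_times(graph, source).items()
               if v != source and t <= budget)
```

Summing (or integrating) `reach` over a grid of budgets gives the area under the cumulative reach curve used as the centrality measure.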
<p>The table below reports betweenness and reach centralities for the ten most betweenness-central stops in the NYC subway network, excluding stops on Staten Island.
I normalise centralities to have maximum values equal to unity.</p>
<table>
<thead>
<tr>
<th align="center">Stop</th>
<th align="center">Borough</th>
<th align="center">Betweenness rank (value)</th>
<th align="center">Reach rank (value)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Lexington Av / 59 St</td>
<td align="center">Manhattan</td>
<td align="center">1 (1.000)</td>
<td align="center">23 (0.973)</td>
</tr>
<tr>
<td align="center">125 St</td>
<td align="center">Manhattan</td>
<td align="center">2 (0.975)</td>
<td align="center">118 (0.870)</td>
</tr>
<tr>
<td align="center">Jay St - MetroTech</td>
<td align="center">Brooklyn</td>
<td align="center">3 (0.959)</td>
<td align="center">46 (0.951)</td>
</tr>
<tr>
<td align="center">86 St</td>
<td align="center">Manhattan</td>
<td align="center">4 (0.952)</td>
<td align="center">81 (0.926)</td>
</tr>
<tr>
<td align="center">Atlantic Av-Barclays Ctr</td>
<td align="center">Brooklyn</td>
<td align="center">5 (0.851)</td>
<td align="center">92 (0.914)</td>
</tr>
<tr>
<td align="center">149 St - Grand Concourse</td>
<td align="center">Bronx</td>
<td align="center">6 (0.794)</td>
<td align="center">158 (0.814)</td>
</tr>
<tr>
<td align="center">Grand Central - 42 St</td>
<td align="center">Manhattan</td>
<td align="center">7 (0.777)</td>
<td align="center">3 (0.991)</td>
</tr>
<tr>
<td align="center">14 St - Union Sq</td>
<td align="center">Manhattan</td>
<td align="center">8 (0.774)</td>
<td align="center">1 (1.000)</td>
</tr>
<tr>
<td align="center">Court Sq - 23 St</td>
<td align="center">Queens</td>
<td align="center">9 (0.763)</td>
<td align="center">42 (0.953)</td>
</tr>
<tr>
<td align="center">Broadway Junction</td>
<td align="center">Brooklyn</td>
<td align="center">10 (0.747)</td>
<td align="center">172 (0.802)</td>
</tr>
</tbody>
</table>
<p>Grand Central is the third most reach-central stop but only the seventh most betweenness-central, contributing to 22% fewer shortest paths than Lexington Avenue/59th Street station.
Broadway Junction is less reach-central than Grand Central—consistent with the chart above—but almost as betweenness-central.
The figure below shows the distribution of betweenness and reach centrality across the 424 stops in the network.</p>
<p><img src="figures/comparison-1.svg" alt=""></p>
<p>Betweenness-central nodes belong to many shortest paths, and so tend to congregate along bottlenecks and highways.
For example, seven of the ten most betweenness-central stops in the NYC subway network provide access to the Lexington Avenue Express (routes 4, 5 and 5X), which is the fastest—but not only—route between Brooklyn and the Bronx.
In contrast, reach centrality emanates from mid/lower Manhattan, which (i) is geographically dense with mutually nearby subway stops and (ii) contains the fastest inter-borough connections.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>This approach could be improved by adjusting for variation in stops’ access to unique amenities so that some stops are more valuable to reach than others. However, this variation is not observable in my data. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Climate change and transport planning
https://bldavies.com/blog/climate-change-transport-planning/
Wed, 06 Nov 2019 00:00:00 +0000https://bldavies.com/blog/climate-change-transport-planning/<p>Last year I organised a <a href="https://motu.nz/resources/dialogue-groups/">dialogue</a> on climate change adaptation within New Zealand’s transport sector.
The purpose of the dialogue was to facilitate discussions between researchers, stakeholders and government on adaptation issues relevant to the sector.
<a href="https://motu.nz/our-work/environment-and-resources/climate-change-impacts/climate-change-adaptation-within-new-zealands-transport-system/">Motu Note #40</a>, released today, summarises those discussions.</p>
<p>Climate change has uncertain supply-side impacts on transport because we don’t know the timing, location or magnitude of the events (e.g., storms, floods and landslides) that threaten to damage our infrastructure.
These impacts affect other parts of the transport network by diverting flows away from damaged areas and putting pressure on alternative routes.</p>
<p>Climate change also has uncertain demand-side impacts on transport through impacts on sectors that use the network.
For example, climate change may trigger land use changes by altering the yields of different crops or the attractiveness of different settlement areas.
These changes shift the spatial allocation of human activity and, consequently, shift users’ derived demand for transport infrastructure.
However, it is unclear how people will vary their land use in response to climate change because such responses involve complex tradeoffs between economic, social and cultural factors.</p>
<p>The uncertainty around climate change impacts creates challenges for transport planners, who must forecast climate change itself, how people will respond to the change, and how those responses translate into spatial shifts in the derived demand for transport.</p>
<p>One solution is to apply real options analysis (ROA) to transport planning and investment decisions.
ROA extends traditional cost-benefit analyses by accounting for managerial flexibility in response to the realisation of future uncertainties, such as the time and place of climate change impacts.
For example, ROA provides tools for valuing the ability to abandon roads that get flooded during storms.
These tools help planners identify investments that meet users’ needs across a range of climate change scenarios.</p>
<p>However, real options provide an <a href="https://bldavies.com/blog/option-value-waiting/">incentive to delay</a> investments in order to draw more samples from, and thereby learn more about, the temporal and spatial distributions of climate change impacts.
Such delays halt investment decisions made by transport network users, who rely on the network to conduct economic and social activities, and by utility providers, who provide services that co-locate with transport infrastructure.</p>
<p>My coauthors and I discuss these issues further in <a href="https://motu.nz/our-work/environment-and-resources/climate-change-impacts/climate-change-adaptation-within-new-zealands-transport-system/">Motu Note #40</a>.</p>
Computing epicycles
https://bldavies.com/blog/computing-epicycles/
Sun, 03 Nov 2019 00:00:00 +0000https://bldavies.com/blog/computing-epicycles/<p>Earlier this year Grant Sanderson, creator of the YouTube channel <a href="https://www.3blue1brown.com">3blue1brown</a>, posted a <a href="https://www.youtube.com/watch?v=r6sGWTCMz2k">video</a> explaining how <a href="http://mathworld.wolfram.com/FourierSeries.html">Fourier series</a> approximate periodic functions using sums of sines and cosines.
In the video and its <a href="https://www.youtube.com/watch?v=-qgreAUpPwM">companion</a>, Grant animates sets of vectors that rotate on circular orbits and, when summed together, reproduce a range of images defined by closed curves.</p>
<p>Consider, for example, the boundary of GitHub’s logo:</p>
<p><img src="figures/plot-1.svg" alt=""></p>
<p>Let <code>\(\gamma:[0,1]\to\mathbb{R}^2\)</code> be the closed curve in <code>\(\mathbb{R}^2\)</code> defining the logo’s boundary.
Suppose there is an integer <code>\(n\)</code> such that
<code>$$\gamma(t) = \sum_{k=-n}^n \gamma_k(t)$$</code>
for some set of circular orbits <code>\(\gamma_{-n},\ldots,\gamma_n:[0,1]\to\mathbb{R}^2\)</code> and for all times <code>\(t\in[0,1]\)</code>.
(Negative and positive subscripts correspond to clockwise and anti-clockwise orbits.
Both may be necessary to reconstruct <code>\(\gamma\)</code>.)
Each orbit <code>\(\gamma_k\)</code> has time <code>\(t\)</code> position defined by the vector
<code>$$\gamma_k(t) = \begin{bmatrix} r_k \cos(2\pi k t + \theta_k) \\ r_k \sin(2\pi k t + \theta_k) \end{bmatrix}$$</code>
for some radius <code>\(r_k\)</code>, angular speed <code>\(2\pi k\)</code> rad/s and initial phase <code>\(\theta_k\)</code>.
Consequently, the curves <code>\(x,y:[0,1]\to\mathbb{R}\)</code> defining the horizontal and vertical components of <code>\(\gamma\)</code> must satisfy the system
<code>$$\begin{align} x(t) &= \sum_{k=-n}^n r_k\cos(2\pi k t + \theta_k) \\ y(t) &= \sum_{k=-n}^n r_k\sin(2\pi k t + \theta_k) \end{align}$$</code>
of identities.
Let <code>\(z:[0,1]\to\mathbb{C}\)</code> be the curve with <code>\(z(t)=x(t)+iy(t)\)</code> for all <code>\(t\in[0,1]\)</code>.
<a href="http://mathworld.wolfram.com/EulerFormula.html">Euler’s formula</a> gives
<code>$$\begin{align} z(t) &= \sum_{k=-n}^n r_k(\cos(2\pi k t + \theta_k) + i \sin(2\pi k t + \theta_k)) \\ &= \sum_{k=-n}^n r_k \exp(2\pi i k t + i\theta_k) \\ &= \sum_{k=-n}^n c_k \exp(2\pi i k t), \end{align}$$</code>
where each Fourier coefficient <code>\(c_k=r_k\exp(i\theta_k)\)</code> has modulus <code>\(\lvert c_k\rvert=r_k\)</code> and (principal) argument <code>\(\mathrm{Arg}(c_k)=\theta_k\)</code>.
Now, notice that
<code>$$\begin{align} \int_0^1 z(t) \exp(-2\pi i k t)\, \mathrm{d}\,t &= \int_0^1\left(\sum_{j=-n}^n c_j \exp(2\pi i j t)\right)\exp(-2\pi i k t)\, \mathrm{d}\,t \\ &= \int_0^1c_k\, \mathrm{d}\,t + \sum_{j\not=k} c_j \int_0^1 \exp(2\pi i (j - k)t)\, \mathrm{d}\,t \\ &= c_k \end{align}$$</code>
for each <code>\(k\)</code> because
<code>$$\int_0^1 \exp(2\pi i (j - k)t)\, \mathrm{d}\,t = 0$$</code>
for all integers <code>\(j\not=k\)</code> by the <code>\(2\pi i\)</code>-periodicity of the complex exponential function.
Thus
<code>$$c_k = \int_0^1 z(t) \exp(-2\pi i k t)\, \mathrm{d}\, t,$$</code>
which can be calculated using Riemann sums given sample points along the component curves <code>\(x\)</code> and <code>\(y\)</code>.
Doing this calculation for each <code>\(k\)</code>, and computing the corresponding moduli <code>\(r_{-n},\ldots,r_n\)</code> and arguments <code>\(\theta_{-n},\ldots,\theta_n\)</code>, provides enough information to generate the animation below.</p>
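As a sanity check of the Riemann-sum recipe, here is a Python sketch that recovers the coefficients of the unit circle, for which c_1 = 1 and every other coefficient vanishes:

```python
import numpy as np

# Riemann-sum approximation of c_k = \int_0^1 z(t) exp(-2*pi*i*k*t) dt
# from N equally spaced samples of the curve.
N = 1024
t = np.arange(N) / N
z = np.exp(2j * np.pi * t)  # sample points along the unit circle

def fourier_coefficient(z, t, k):
    return np.mean(z * np.exp(-2j * np.pi * k * t))

c1 = fourier_coefficient(z, t, 1)  # modulus gives radius r_1, argument gives phase theta_1
c2 = fourier_coefficient(z, t, 2)  # should be (numerically) zero
```

For a curve like the GitHub logo, `z` would instead hold sampled points `x + 1j*y` along its boundary, and the loop over `k` from `-n` to `n` yields the orbit radii and phases.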
<p><img src="figures/animation-1.gif" alt=""></p>
Introducing nberwp
https://bldavies.com/blog/introducing-nberwp/
Tue, 24 Sep 2019 00:00:00 +0000https://bldavies.com/blog/introducing-nberwp/<p>Today I published <a href="https://github.com/bldavies/nberwp">nberwp</a>, an R package providing data on <a href="https://www.nber.org">NBER</a> working papers published between 1973 and 2018.
It can be installed from GitHub via <a href="https://github.com/r-lib/remotes">remotes</a>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">remotes</span><span class="p">)</span>
<span class="nf">install_github</span><span class="p">(</span><span class="s">'bldavies/nberwp'</span><span class="p">)</span>
</code></pre></div><p>nberwp provides a data frame <code>papers</code>, each row describing a unique working paper:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">papers</span>
</code></pre></div><pre><code>## # A tibble: 25,413 x 4
## number year month title
##    number  year month title                                                    
##     &lt;int&gt; &lt;int&gt; &lt;int&gt; &lt;chr&gt;                                                    
## 1 1 1973 6 Education, Information, and Efficiency
## 2 2 1973 6 Hospital Utilization: An Analysis of SMSA Differences in …
## 3 3 1973 6 Error Components Regression Models and Their Applications
## 4 4 1973 7 Human Capital Life Cycle of Earnings Models: A Specific S…
## 5 5 1973 7 A Life Cycle Family Model
## 6 6 1973 7 A Review of Cyclical Indicators for the United States: Pr…
## 7 7 1973 8 The Definition and Impact of College Quality
## 8 8 1973 9 Multinational Firms and the Factor Intensity of Trade
## 9 9 1973 9 From Age-Earnings Profiles to the Distribution of Earning…
## 10 10 1973 9 Monte Carlo for Robust Regression: The Swindle Unmasked
## # … with 25,403 more rows
</code></pre><p><code>number</code> uniquely identifies working papers by their positions in the series, while <code>year</code> and <code>month</code> capture papers’ publication dates.
The chart below uses these dates to show the NBER catalogue’s expansion.</p>
<p><img src="figures/papers-1.svg" alt=""></p>
<p><code>title</code> facilitates simple text mining, such as determining which words are used in working paper titles most frequently:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">tidytext</span><span class="p">)</span>
<span class="n">words</span> <span class="o"><-</span> <span class="n">papers</span> <span class="o">%>%</span>
<span class="nf">unnest_tokens</span><span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="n">title</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">anti_join</span><span class="p">(</span><span class="nf">get_stopwords</span><span class="p">())</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="nf">nchar</span><span class="p">(</span><span class="nf">gsub</span><span class="p">(</span><span class="s">'[a-z.]'</span><span class="p">,</span> <span class="s">''</span><span class="p">,</span> <span class="n">word</span><span class="p">))</span> <span class="o">==</span> <span class="m">0</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">distinct</span><span class="p">(</span><span class="n">number</span><span class="p">,</span> <span class="n">word</span><span class="p">)</span>
<span class="n">words</span> <span class="o">%>%</span>
<span class="nf">count</span><span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="n">sort</span> <span class="o">=</span> <span class="bp">T</span><span class="p">)</span>
</code></pre></div><pre><code>## # A tibble: 11,636 x 2
## word n
##    word         n
##    &lt;chr&gt;    &lt;int&gt;
## 1 evidence 2615
## 2 policy 1350
## 3 market 1322
## 4 effects 1193
## 5 trade 1052
## 6 capital 979
## 7 labor 940
## 8 economic 910
## 9 u.s 882
## 10 health 875
## # … with 11,626 more rows
</code></pre><p>Many papers discuss capital and labour markets, and the effects of public policies.
The word “evidence” appears in twice as many titles as any other (non-stop) word, which I suspect reflects the growing use of the “&lt;issue&gt;: Evidence from &lt;context&gt;” title format:</p>
<p><img src="figures/evidence-from-1.svg" alt=""></p>
<p>The NBER’s <a href="https://www.nber.org/RePEc/nbr/nberwo/">RePEc index</a>, from which I derive <code>papers</code>, also contains data linking papers to their authors.
I plan to include these data in a future version of nberwp once I’ve disambiguated authors’ names.</p>
Information gerrymandering
https://bldavies.com/blog/information-gerrymandering/
Sat, 14 Sep 2019 00:00:00 +0000https://bldavies.com/blog/information-gerrymandering/<p>Last week <em>Nature</em> published “<a href="https://doi.org/10.1038/s41586-019-1507-6">Information Gerrymandering and Undemocratic Decisions</a>,” an article analysing the effect of peer influences on the outcome of collective decisions.</p>
<p>Suppose, for example, that a 24-member committee must collectively decide whether to adopt a new policy.
The committee agrees to make the decision by vote, and will action whichever choice—accept or reject—wins a two-thirds majority.
One week before the vote, half of the committee members support the policy and half want it rejected.
Fearing stagnation, each member updates their position daily to match the majority among their six most trusted colleagues.
This update process allows committee members to influence each others’ positions, potentially shifting the split vote to a decisive majority.</p>
<p>Assuming trust is pairwise mutual, the “influence network” among committee members can be modelled as a 6-regular graph on 24 vertices, with edges connecting influencers.
The function below uses this regular graph model to simulate the outcome of many votes:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">simulate_votes</span> <span class="o"><-</span> <span class="nf">function</span><span class="p">(</span><span class="n">n_votes</span><span class="p">,</span> <span class="n">committee_size</span><span class="p">,</span> <span class="n">n_influences</span><span class="p">,</span> <span class="n">n_days</span><span class="p">)</span> <span class="p">{</span>
<span class="c1"># Create regular graph and identify neighbours</span>
<span class="n">net</span> <span class="o"><-</span> <span class="n">igraph</span><span class="o">::</span><span class="nf">k.regular.game</span><span class="p">(</span><span class="n">committee_size</span><span class="p">,</span> <span class="n">n_influences</span><span class="p">)</span>
<span class="n">nb</span> <span class="o"><-</span> <span class="n">igraph</span><span class="o">::</span><span class="nf">neighborhood</span><span class="p">(</span><span class="n">net</span><span class="p">)</span>
<span class="c1"># Define function for simulating one vote</span>
<span class="n">simulate_one</span> <span class="o"><-</span> <span class="nf">function</span><span class="p">(</span><span class="n">vote</span><span class="p">)</span> <span class="p">{</span>
<span class="n">accepts</span> <span class="o"><-</span> <span class="nf">vector</span><span class="p">(</span><span class="s">'double'</span><span class="p">,</span> <span class="n">n_days</span><span class="p">)</span>
<span class="n">init_positions</span> <span class="o"><-</span> <span class="nf">sample</span><span class="p">(</span><span class="nf">rep</span><span class="p">(</span><span class="nf">c</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="m">1</span><span class="p">),</span> <span class="n">committee_size</span> <span class="o">%/%</span> <span class="m">2</span><span class="p">),</span> <span class="n">replace</span> <span class="o">=</span> <span class="bp">F</span><span class="p">)</span>
<span class="n">positions</span> <span class="o"><-</span> <span class="n">init_positions</span>
<span class="nf">for </span><span class="p">(</span><span class="n">day</span> <span class="n">in</span> <span class="nf">seq_len</span><span class="p">(</span><span class="n">n_days</span><span class="p">))</span> <span class="p">{</span>
<span class="n">positions</span> <span class="o"><-</span> <span class="n">purrr</span><span class="o">::</span><span class="nf">map_dbl</span><span class="p">(</span><span class="n">nb</span><span class="p">,</span> <span class="o">~</span><span class="p">(</span><span class="m">1</span> <span class="o">*</span> <span class="p">(</span><span class="nf">mean</span><span class="p">(</span><span class="n">positions[.]</span><span class="p">)</span> <span class="o">>=</span> <span class="m">0.5</span><span class="p">)))</span>
<span class="n">accepts[day]</span> <span class="o"><-</span> <span class="n">committee_size</span> <span class="o">*</span> <span class="nf">mean</span><span class="p">(</span><span class="n">positions</span><span class="p">)</span>
<span class="p">}</span>
<span class="nf">list</span><span class="p">(</span><span class="n">init_positions</span> <span class="o">=</span> <span class="n">init_positions</span><span class="p">,</span> <span class="n">accepts</span> <span class="o">=</span> <span class="n">accepts</span><span class="p">)</span>
<span class="p">}</span>
<span class="c1"># Simulate many votes</span>
<span class="n">votes</span> <span class="o"><-</span> <span class="nf">lapply</span><span class="p">(</span><span class="nf">seq_len</span><span class="p">(</span><span class="n">n_votes</span><span class="p">),</span> <span class="n">simulate_one</span><span class="p">)</span>
<span class="c1"># Return results</span>
<span class="nf">list</span><span class="p">(</span><span class="n">network</span> <span class="o">=</span> <span class="n">net</span><span class="p">,</span> <span class="n">results</span> <span class="o">=</span> <span class="n">votes</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div><p><code>simulate_one</code> randomises committee members’ initial positions—encoding “accept” as one and “reject” as zero—before updating these positions based on neighbouring majorities.
Running <code>simulate_one</code> many times allows me to simulate the committee’s decision for an ensemble of randomly generated influence networks.
The last few lines of <code>simulate_votes</code> generate this ensemble and output the simulation results.</p>
<p>Let’s simulate the committee’s vote 1000 times, including one week of daily position updates, and tabulate the simulated decision frequencies:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="c1"># Run simulations</span>
<span class="n">committee_size</span> <span class="o"><-</span> <span class="m">24</span>
<span class="nf">set.seed</span><span class="p">(</span><span class="m">0</span><span class="p">)</span>
<span class="n">votes</span> <span class="o"><-</span> <span class="nf">simulate_votes</span><span class="p">(</span><span class="m">1000</span><span class="p">,</span> <span class="n">committee_size</span><span class="p">,</span> <span class="m">6</span><span class="p">,</span> <span class="m">7</span><span class="p">)</span>
<span class="c1"># Define function for converting vote counts to committee decisions</span>
<span class="n">get_decision</span> <span class="o"><-</span> <span class="nf">function</span><span class="p">(</span><span class="n">accepts</span><span class="p">)</span> <span class="p">{</span>
<span class="n">dplyr</span><span class="o">::</span><span class="nf">case_when</span><span class="p">(</span>
<span class="n">accepts</span> <span class="o">>=</span> <span class="n">committee_size</span> <span class="o">*</span> <span class="m">2</span> <span class="o">/</span> <span class="m">3</span> <span class="o">~</span> <span class="s">'Accept'</span><span class="p">,</span>
<span class="n">accepts</span> <span class="o"><=</span> <span class="n">committee_size</span> <span class="o">/</span> <span class="m">3</span> <span class="o">~</span> <span class="s">'Reject'</span><span class="p">,</span>
<span class="kc">TRUE</span> <span class="o">~</span> <span class="s">'Deadlock'</span>
<span class="p">)</span>
<span class="p">}</span>
<span class="c1"># Tabulate decision frequencies</span>
<span class="nf">tibble</span><span class="p">(</span><span class="n">accepts</span> <span class="o">=</span> <span class="n">purrr</span><span class="o">::</span><span class="nf">map_dbl</span><span class="p">(</span><span class="n">votes</span><span class="o">$</span><span class="n">results</span><span class="p">,</span> <span class="o">~</span><span class="nf">tail</span><span class="p">(</span><span class="n">.$accepts</span><span class="p">,</span> <span class="m">1</span><span class="p">)))</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">Decision</span> <span class="o">=</span> <span class="nf">get_decision</span><span class="p">(</span><span class="n">accepts</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">count</span><span class="p">(</span><span class="n">Decision</span><span class="p">,</span> <span class="n">name</span> <span class="o">=</span> <span class="s">'Frequency'</span><span class="p">)</span> <span class="o">%>%</span>
<span class="n">knitr</span><span class="o">::</span><span class="nf">kable</span><span class="p">(</span><span class="n">align</span> <span class="o">=</span> <span class="s">'c'</span><span class="p">)</span>
</code></pre></div><table>
<thead>
<tr>
<th align="center">Decision</th>
<th align="center">Frequency</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Accept</td>
<td align="center">292</td>
</tr>
<tr>
<td align="center">Deadlock</td>
<td align="center">429</td>
</tr>
<tr>
<td align="center">Reject</td>
<td align="center">279</td>
</tr>
</tbody>
</table>
<p>Variation in decisions comes from variation in the influence network’s structure.
To see how, let <code>\(\Delta_i\)</code> denote the proportion of committee member <code>\(i\)</code>'s influencers with the same initial position on the policy as member <code>\(i\)</code>, and define
<code>$$a_i = \begin{cases} \Delta_i & \text{if}\ \Delta_i\ge 1/2\\ -(1 - \Delta_i) & \text{otherwise}. \end{cases}$$</code>
The variable <code>\(a_i\)</code> captures the “influence assortment” of committee member <code>\(i\)</code>.
Positive influence assortment means that they mainly agree with their influencers; negative influence assortment means that they mainly disagree.</p>
<p>Now let <code>\(\mathcal{A}\)</code> and <code>\(\mathcal{R}\)</code> be the sets of committee members whose initial positions are to accept and reject the policy, and consider the difference
<code>$$G = \frac{1}{\lvert\mathcal{A}\rvert}\sum_{i\in\mathcal{A}} a_i - \frac{1}{\lvert\mathcal{R}\rvert}\sum_{j\in\mathcal{R}} a_j$$</code>
in mean influence assortments between these sets.
The “influence gap” <code>\(G\)</code> is greater than zero precisely when committee members in <code>\(\mathcal{A}\)</code> are, on average, more positively influence assorted than committee members in <code>\(\mathcal{R}\)</code>.</p>
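<p>These definitions translate directly into code. The sketch below is my own illustration (not the <em>Nature</em> article&rsquo;s code): it computes each member&rsquo;s influence assortment and the resulting influence gap for one random influence network and initial position vector, using igraph&rsquo;s <code>neighborhood</code> with <code>mindist = 1</code> so that a member&rsquo;s own position is excluded from their influencers.</p>

```r
# Illustrative sketch: influence assortment a_i and influence gap G
# for one random 6-regular influence network on 24 members.
library(igraph)

influence_gap <- function(net, positions) {
  nb <- neighborhood(net, mindist = 1)  # influencers only, excluding self
  Delta <- sapply(seq_along(positions), function(i) {
    mean(positions[as.numeric(nb[[i]])] == positions[i])  # share who agree
  })
  a <- ifelse(Delta >= 0.5, Delta, -(1 - Delta))  # influence assortment
  # Mean assortment among accepters minus mean among rejecters
  mean(a[positions == 1]) - mean(a[positions == 0])
}

set.seed(0)
net <- igraph::k.regular.game(24, 6)
positions <- sample(rep(c(0, 1), 12))  # half accept, half reject
gap <- influence_gap(net, positions)
gap
```

<p>Repeating this across many simulated networks gives a distribution of influence gaps against which vote outcomes can be compared.</p>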
<p>The scatter plot below shows that <code>\(G\)</code> correlates positively with the probability that the committee accepts the policy.
Intuitively, positive influence gaps characterise influence networks with disproportionately many neighbouring majorities in favour of acceptance, making a vote to accept the policy more likely.</p>
<p><img src="figures/correlation-1.svg" alt=""></p>
<p>The relationship between influence gaps and vote outcomes creates an incentive to <a href="https://en.wikipedia.org/wiki/Gerrymandering">gerrymander</a> the influence network to make preferred outcomes more likely.
For example, a subset of committee members wanting to accept the policy could cooperate to gain the trust of specific members so as to construct a positive influence gap.
In political and legal contexts (e.g. elections and jury votes), bad actors may act on the incentive to gerrymander voters’ influences and, in doing so, pervert the democratic process.</p>
<p>The <em>Nature</em> article extends my model in three ways:</p>
<ol>
<li>it generalises to directed influence networks by relaxing the assumption of pairwise mutual trust;</li>
<li>it uses a more elaborate rule for updating positions;</li>
<li>it introduces stubborn committee members (“zealots”) who never change their position.</li>
</ol>
<p>However, none of these extensions change the model’s prediction: gerrymandering influence networks can lead to undemocratic decision-making by biasing the outcome of otherwise-split votes.</p>
Sampling the Motu coauthorship network
https://bldavies.com/blog/sampling-motu-coauthorship-network/
Tue, 30 Jul 2019 00:00:00 +0000https://bldavies.com/blog/sampling-motu-coauthorship-network/<p>Suppose I have some data that describe a bipartite author-publication network.
I want to analyse the underlying coauthorship network—that is, the bipartite projection onto the set of authors—but I can’t compute that network because the data are too large to fit into memory.
Instead, I estimate <a href="https://en.wikipedia.org/wiki/Graph_property">properties</a> of the full coauthorship network by sampling the author-publication incidence data before computing the bipartite projection.</p>
<p>If the incidence data are stored as a matrix then I can sample its rows or columns, which corresponds to sampling the author or publication sets.
If the incidence data are stored as a list of author-publication pairs then I can sample these pairs, which corresponds to sampling edges in the bipartite network.</p>
<p>Which of these three methods—author, publication and edge sampling—most reliably estimates the full coauthorship network’s properties?</p>
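<p>To make the three methods concrete, here is a sketch of my own (on toy data, not the Motu data) applying each to a list of author-publication pairs, with igraph computing the bipartite projection onto the author set:</p>

```r
library(igraph)

# Toy incidence data: one row per author-publication pair (hypothetical)
pairs <- data.frame(
  author = c("A", "A", "B", "B", "C", "C", "D"),
  paper  = c("p1", "p2", "p1", "p3", "p2", "p3", "p3")
)

# Bipartite projection onto the author set
project <- function(pairs) {
  g <- graph_from_data_frame(pairs, directed = FALSE)
  V(g)$type <- V(g)$name %in% pairs$paper  # FALSE = author, TRUE = paper
  bipartite_projection(g, which = "false")
}

sample_half <- function(x) sample(x, ceiling(length(x) / 2))

set.seed(1)
author_sample <- pairs[pairs$author %in% sample_half(unique(pairs$author)), ]
pub_sample    <- pairs[pairs$paper %in% sample_half(unique(pairs$paper)), ]
edge_sample   <- pairs[sample_half(seq_len(nrow(pairs))), ]

vcount(project(pairs))  # order of the full coauthorship network
```

<p>Each of <code>author_sample</code>, <code>pub_sample</code> and <code>edge_sample</code> can be passed to <code>project</code> and its properties compared with those of the full projection.</p>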
<p>To develop some intuition, I apply each sampling method to <a href="https://bldavies.com/blog/coauthorship-networks-motu/">the coauthorship network among Motu researchers</a>.
The <a href="https://github.com/bldavies/motuwp">data</a> describing this network are small enough that I can compute the true values of various network properties, which I compare with the sampling distributions of such values generated by each sampling method.</p>
<p>The table below reports the 95% confidence intervals for each property under each method, in all cases sampling (uniformly at random and without replacement) about half of the corresponding entities (i.e., authors, publications or edges) before computing the bipartite projection onto the set of authors.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<table>
<thead>
<tr>
<th align="left">Property</th>
<th align="right">True value</th>
<th align="right">Author sampling</th>
<th align="right">Pub. sampling</th>
<th align="right">Edge sampling</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Order</td>
<td align="right">82.00</td>
<td align="right">42.00 ± 0.00</td>
<td align="right">64.40 ± 0.60</td>
<td align="right">64.90 ± 0.60</td>
</tr>
<tr>
<td align="left">Size</td>
<td align="right">218.00</td>
<td align="right">56.10 ± 2.60</td>
<td align="right">137.40 ± 3.40</td>
<td align="right">81.80 ± 1.40</td>
</tr>
<tr>
<td align="left">Density (%)</td>
<td align="right">6.56</td>
<td align="right">6.50 ± 0.30</td>
<td align="right">6.70 ± 0.20</td>
<td align="right">4.00 ± 0.10</td>
</tr>
<tr>
<td align="left">Mean distance</td>
<td align="right">2.52</td>
<td align="right">2.60 ± 0.10</td>
<td align="right">2.70 ± 0.00</td>
<td align="right">3.10 ± 0.00</td>
</tr>
<tr>
<td align="left">Transitivity (%)</td>
<td align="right">30.91</td>
<td align="right">30.80 ± 2.00</td>
<td align="right">31.90 ± 1.30</td>
<td align="right">24.80 ± 1.20</td>
</tr>
</tbody>
</table>
<p>All three methods under-estimate the order and size of the full coauthorship network.
However, this is partly by construction: sampling any proportion of authors will always deliver that proportion of nodes in the coauthorship network, and taking a strict subset of publications or edges will generally omit some inter-author connections.</p>
<p>Author and publication sampling deliver accurate density and transitivity estimates.
Edge sampling is less accurate: it produces relatively sparse networks in which authors are more distant, and less likely to share common coauthors, than in the full network.</p>
<p>The chart below plots the sample means and 95% confidence intervals generated by each sampling method for varying sampling rates.
(A sampling rate of p% means that I randomly select p% of the corresponding entities before computing the coauthorship network.)
As the sampling rate rises, the sample means converge to the true value.
I vertically nudge the plotted points to prevent overlaps and make it easier to compare methods at each sampling rate.</p>
<p><img src="figures/convergence-1.svg" alt=""></p>
<p>Publication sampling over-estimates the coauthorship network’s density at low sampling rates.
This could be because most working papers are written by authors in the densely connected core of the coauthorship network, so publication sampling is more likely to recover this core than the less connected and less productive periphery.</p>
<p>Edge sampling appears to generate biased density and transitivity estimates.
Intuitively, pairs of sampled edges are unlikely to be incident with the same publication and thus unlikely to form an edge in the bipartite projection.</p>
<p>All three methods under-estimate the mean distance between authors at low sampling rates but over-estimate this distance at high sampling rates.
This pattern arises because the distance calculation considers connected nodes only.
At low sampling rates, most connected components are dyads or triads, and so the distances between connected nodes are small.
The number of nodes in each component rises with the sampling rate, which leads to mean distance over-estimates until the number of edges within each component catches up.</p>
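<p>igraph&rsquo;s <code>mean_distance</code> illustrates why: by default it averages over connected pairs only, so a graph made of small components reports short distances. A toy example of my own:</p>

```r
library(igraph)

dyads <- graph_from_literal(A - B, C - D)  # two disconnected dyads
path4 <- make_path(4)                      # one path on four nodes

mean_distance(dyads)  # 1: unconnected pairs are ignored by default
mean_distance(path4)  # (1 + 1 + 1 + 2 + 2 + 3) / 6 = 5/3
```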
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Within each sample, I delete authors with no publications and publications with no authors. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Updating motuwp
https://bldavies.com/blog/updating-motuwp/
Sun, 28 Jul 2019 00:00:00 +0000https://bldavies.com/blog/updating-motuwp/<p>Today I updated the <a href="https://github.com/bldavies/motuwp">motuwp</a> GitHub repository, which stores data on Motu working papers and their authors.
I made three main changes:</p>
<p>First, I switched from <a href="https://www.crummy.com/software/BeautifulSoup/">BeautifulSoup</a> to <a href="https://rvest.tidyverse.org">rvest</a> for scraping the working paper directory.
My original Python <a href="https://github.com/bldavies/motuwp/blob/97c9074908367154fcdddb33d377feb45528e4ae/code/urls.py">script</a> used a bunch of regex commands to build the list of working paper URLs, despite warnings that <a href="https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags">regular expressions and HTML generally don’t cooperate</a>.
I should have just used CSS selectors, which I now do using <a href="https://github.com/bldavies/motuwp/tree/8f4b1c02e04f8e5e45b4325195bb4f03ac0ee707/code/data.R"><code>data.R</code></a>.</p>
<p>Second, I implemented a caching mechanism for passing information between runs of <code>data.R</code>.
The script queries only papers released since the last run, so adding new papers is faster and requires fewer HTTP requests.</p>
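<p>The caching pattern is roughly the following sketch (the file name and columns are hypothetical, not the repository&rsquo;s actual code): persist the scraped table with <code>saveRDS</code> and, on the next run, scrape only papers absent from it.</p>

```r
cache_path <- file.path(tempdir(), "papers.rds")  # hypothetical location

# Load the previous run's results, if any
old <- if (file.exists(cache_path)) readRDS(cache_path) else
  data.frame(url = character(0), title = character(0))

# ... scrape only URLs not already in old$url ...
new <- data.frame(url = character(0), title = character(0))  # placeholder

papers <- rbind(old, new)
saveRDS(papers, cache_path)
```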
<p>Third, I added working paper titles to the information collected.
This allows me to, for example, use <a href="https://bldavies.com/blog/reading-ministerial-diaries/#computing-tf-idf-scores">tf-idf scores</a> to characterise research areas:</p>
<p><img src="figures/tf-idf-1.svg" alt=""></p>
College degrees in the US: Community detection
https://bldavies.com/blog/college-degrees-community-detection/
Sat, 27 Jul 2019 00:00:00 +0000https://bldavies.com/blog/college-degrees-community-detection/<p>In <a href="https://bldavies.com/blog/college-degrees-similarity-measures/">my last post</a>, I compared measures of similarity among college degree fields.
My goal in this post is to partition the set of fields such that each field has greater within-part similarities than between-part similarities.
One approach is to <a href="https://en.wikipedia.org/wiki/Hierarchical_clustering">hierarchically cluster</a> fields based on their similarities, producing a dendrogram that can be cut at different heights to obtain different partitions.
Generating the dendrogram restricts my choice set but, ultimately, I still have to choose which partition is “best.”</p>
<p>The intellectually honest way forward is to define an objective function on the set of partitions and choose the partition that attains the function’s maximum.
One such function is network <a href="https://en.wikipedia.org/wiki/Modularity_%28networks%29">modularity</a>, which captures the extent to which groups of nodes are intra-connected densely but inter-connected sparsely.
Ranking partitions by modularity removes the need for supervision: rather than making a subjective, potentially biased judgment on which partition is “best,” I simply choose the partition that maximises modularity.</p>
<p>Unfortunately, <a href="https://arxiv.org/abs/physics/0608255">maximising modularity is hard</a>.
In most cases, finding the globally optimal partition is infeasible and a heuristic algorithm must be used to find an approximate solution.
<a href="https://arxiv.org/abs/cond-mat/0408187">Clauset et al. (2004)</a> suggest a <a href="https://en.wikipedia.org/wiki/Greedy_algorithm">greedy</a> algorithm:</p>
<ol>
<li>Assign every node to a unique “community.”</li>
<li>Find the pair of communities whose union delivers the greatest increase in modularity. Replace these communities with their union.</li>
<li>Repeat step 2 until the modularity gain is negative or only one community remains.</li>
</ol>
<p>The term “community” refers to a set of nodes and stems from the use of network science to probe the <a href="https://en.wikipedia.org/wiki/Community_structure">community structure</a> of social interactions.</p>
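<p>This greedy algorithm is available in igraph as <code>cluster_fast_greedy</code>. A minimal sketch on a toy weighted network (hypothetical data, not the degree-field networks analysed below):</p>

```r
library(igraph)

set.seed(1)
g <- sample_gnp(30, 0.2)         # random undirected network
E(g)$weight <- runif(ecount(g))  # stand-in for similarity weights

comm <- cluster_fast_greedy(g, weights = E(g)$weight)
membership(comm)  # community label for each node
modularity(comm)  # modularity of the chosen partition
```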
<p>I apply Clauset et al.’s algorithm to the networks defined using the co-occurrence, Dice, Jaccard, Ochiai and overlap measures discussed in <a href="https://bldavies.com/blog/college-degrees-similarity-measures/">my previous post</a>, as well as the unweighted network in which fields are adjacent if at least one graduate studied them both.
The table below presents the number and size of communities detected in each network, and the corresponding maximised modularity values.</p>
<table>
<thead>
<tr>
<th align="center">Network</th>
<th align="center">Communities</th>
<th align="center">Fields</th>
<th align="center">Community sizes (millions of graduates)</th>
<th align="center">Modularity</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Co-occurrences</td>
<td align="center">6</td>
<td align="center">19–51</td>
<td align="center">6.4–19.3</td>
<td align="center">0.380</td>
</tr>
<tr>
<td align="center">Dice</td>
<td align="center">8</td>
<td align="center">11–50</td>
<td align="center">1.7–16.7</td>
<td align="center">0.456</td>
</tr>
<tr>
<td align="center">Jaccard</td>
<td align="center">8</td>
<td align="center">11–50</td>
<td align="center">1.7–19.8</td>
<td align="center">0.457</td>
</tr>
<tr>
<td align="center">Ochiai</td>
<td align="center">8</td>
<td align="center">13–40</td>
<td align="center">1.4–15.8</td>
<td align="center">0.433</td>
</tr>
<tr>
<td align="center">Overlap</td>
<td align="center">8</td>
<td align="center">9–30</td>
<td align="center">0.9–17.6</td>
<td align="center">0.423</td>
</tr>
<tr>
<td align="center">Unweighted</td>
<td align="center">3</td>
<td align="center">11–84</td>
<td align="center">3.5–42.2</td>
<td align="center">0.118</td>
</tr>
</tbody>
</table>
<p>Clauset et al.’s algorithm detects eight communities in the Dice, Jaccard, Ochiai and overlap similarity networks, with each community containing at least nine fields and at most 50 fields.
The Jaccard measure delivers the greatest maximum modularity.
Ignoring edge weights makes within- and between-part connections harder to separate, leading to fewer communities being detected.</p>
<p>I identify the “representatives” of each community as the fields with the largest ratios of mean within-community to mean between-community similarity.
I take the natural logarithm of these ratios to rein in the extreme values caused by near-zero divisors.
The following bar chart presents the representatives of each community detected in the Jaccard similarity network.</p>
<p><img src="figures/jaccard-representatives-1.svg" alt=""></p>
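<p>The representative ranking can be computed as in the sketch below, where <code>S</code> is a symmetric similarity matrix and <code>comm</code> assigns each field a community label (both hypothetical inputs for illustration):</p>

```r
# Log ratio of mean within- to mean between-community similarity per field
representative_scores <- function(S, comm) {
  sapply(seq_len(nrow(S)), function(i) {
    same <- comm == comm[i]
    same[i] <- FALSE  # exclude the field's self-similarity
    log(mean(S[i, same]) / mean(S[i, !same]))
  })
}

set.seed(1)
S <- matrix(runif(36), 6, 6)
S <- (S + t(S)) / 2  # symmetrise
comm <- c(1, 1, 1, 2, 2, 2)
representative_scores(S, comm)  # larger = more representative
```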
<p>Communities 2, 3, 4, 5, 7 and 8 appear to capture business, engineering, media, education, agriculture and biology-related fields.
Communities 1 and 6 are less clearly classifiable.</p>
<p>The table below presents the demographic compositions of the eight communities detected in the Jaccard similarity network.
Community 3 contains nearly 30% of degree fields but only about 20% of graduates, and is the most male-dominated among the eight communities detected.
Community 5 is the most female-dominated and has the highest mean age.
Educational attainment is lowest in communities 2 and 4, and highest in community 8.</p>
<table>
<thead>
<tr>
<th align="center">Community</th>
<th align="center">Fields</th>
<th align="center">Total graduates (millions)</th>
<th align="center">Mean graduate age</th>
<th align="center">% of graduates female</th>
<th align="center">% of graduates with post-graduate degree</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">1</td>
<td align="center">28</td>
<td align="center">19.8</td>
<td align="center">48.4</td>
<td align="center">64.6</td>
<td align="center">39.7</td>
</tr>
<tr>
<td align="center">2</td>
<td align="center">11</td>
<td align="center">14.4</td>
<td align="center">47.8</td>
<td align="center">41.6</td>
<td align="center">26.5</td>
</tr>
<tr>
<td align="center">3</td>
<td align="center">50</td>
<td align="center">14.1</td>
<td align="center">47.6</td>
<td align="center">28.4</td>
<td align="center">42.9</td>
</tr>
<tr>
<td align="center">4</td>
<td align="center">18</td>
<td align="center">9.6</td>
<td align="center">45.1</td>
<td align="center">60.8</td>
<td align="center">27.6</td>
</tr>
<tr>
<td align="center">5</td>
<td align="center">16</td>
<td align="center">6.5</td>
<td align="center">54.7</td>
<td align="center">76.7</td>
<td align="center">49.0</td>
</tr>
<tr>
<td align="center">6</td>
<td align="center">18</td>
<td align="center">3.6</td>
<td align="center">43.4</td>
<td align="center">70.8</td>
<td align="center">33.6</td>
</tr>
<tr>
<td align="center">7</td>
<td align="center">17</td>
<td align="center">2.1</td>
<td align="center">47.3</td>
<td align="center">35.4</td>
<td align="center">33.8</td>
</tr>
<tr>
<td align="center">8</td>
<td align="center">15</td>
<td align="center">1.7</td>
<td align="center">45.6</td>
<td align="center">54.6</td>
<td align="center">51.7</td>
</tr>
<tr>
<td align="center">Overall</td>
<td align="center">173</td>
<td align="center">71.8</td>
<td align="center">47.9</td>
<td align="center">52.7</td>
<td align="center">36.7</td>
</tr>
</tbody>
</table>
College degrees in the US: Similarity measures
https://bldavies.com/blog/college-degrees-similarity-measures/
Sun, 14 Jul 2019 00:00:00 +0000https://bldavies.com/blog/college-degrees-similarity-measures/<p>In <a href="https://bldavies.com/blog/college-degrees-demographics/">my last post</a>, I used the <a href="https://census.gov/programs-surveys/acs/data/pums.html">2016 ACS PUMS</a> data to analyse how educational attainment and degree field choices vary between demographic groups.
I commented that the rates at which graduates pair fields together “provide insight into the intellectual connections between fields.”
This post compares different ways of estimating the strength of such connections.</p>
<h2 id="field-pair-co-occurrences">Field pair co-occurrences</h2>
<p>The <a href="https://github.com/bldavies/college-degrees">repository</a> for this post contains the files <code>observations.csv</code> and <code>fields.csv</code>, which I import as follows.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">readr</span><span class="p">)</span>
<span class="n">data_url</span> <span class="o"><-</span> <span class="s">'https://raw.githubusercontent.com/bldavies/college-degrees/master/data/'</span>
<span class="n">observations</span> <span class="o"><-</span> <span class="nf">read_csv</span><span class="p">(</span><span class="nf">paste0</span><span class="p">(</span><span class="n">data_url</span><span class="p">,</span> <span class="s">'observations.csv'</span><span class="p">))</span>
<span class="n">fields</span> <span class="o"><-</span> <span class="nf">read_csv</span><span class="p">(</span><span class="nf">paste0</span><span class="p">(</span><span class="n">data_url</span><span class="p">,</span> <span class="s">'fields.csv'</span><span class="p">))</span>
</code></pre></div><p><code>observations</code> aggregates the sample weights in the PUMS data by age, sex, degree level, and degree fields.
I use these weights to construct a field pair co-occurrence matrix <code>C</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">dplyr</span><span class="p">)</span>
<span class="n">C</span> <span class="o"><-</span> <span class="n">observations</span> <span class="o">%>%</span>
<span class="c1"># Aggregate sample weights by field pair</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">level</span> <span class="o">></span> <span class="m">0</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">field2</span> <span class="o">=</span> <span class="nf">ifelse</span><span class="p">(</span><span class="nf">is.na</span><span class="p">(</span><span class="n">field2</span><span class="p">),</span> <span class="n">field1</span><span class="p">,</span> <span class="n">field2</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">count</span><span class="p">(</span><span class="n">field1</span><span class="p">,</span> <span class="n">field2</span><span class="p">,</span> <span class="n">wt</span> <span class="o">=</span> <span class="n">weight</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">n</span> <span class="o">=</span> <span class="n">n</span> <span class="o">/</span> <span class="m">2</span><span class="p">)</span> <span class="o">%>%</span>
<span class="c1"># Identify weighted field-respondent pairs</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">respondent</span> <span class="o">=</span> <span class="nf">row_number</span><span class="p">())</span> <span class="o">%>%</span>
<span class="n">tidyr</span><span class="o">::</span><span class="nf">gather</span><span class="p">(</span><span class="n">key</span><span class="p">,</span> <span class="n">field</span><span class="p">,</span> <span class="n">field1</span><span class="p">,</span> <span class="n">field2</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">count</span><span class="p">(</span><span class="n">field</span><span class="p">,</span> <span class="n">respondent</span><span class="p">,</span> <span class="n">wt</span> <span class="o">=</span> <span class="n">n</span><span class="p">)</span> <span class="o">%>%</span>
<span class="c1"># Count field pair co-occurrences</span>
<span class="n">widyr</span><span class="o">::</span><span class="nf">pairwise_count</span><span class="p">(</span><span class="n">field</span><span class="p">,</span> <span class="n">respondent</span><span class="p">,</span> <span class="n">wt</span> <span class="o">=</span> <span class="n">n</span><span class="p">,</span> <span class="n">diag</span> <span class="o">=</span> <span class="kc">TRUE</span><span class="p">)</span> <span class="o">%>%</span>
<span class="c1"># Cast to matrix</span>
<span class="n">reshape2</span><span class="o">::</span><span class="nf">acast</span><span class="p">(</span><span class="n">item1</span> <span class="o">~</span> <span class="n">item2</span><span class="p">,</span> <span class="n">value.var</span> <span class="o">=</span> <span class="s">'n'</span><span class="p">,</span> <span class="n">fill</span> <span class="o">=</span> <span class="m">0</span><span class="p">)</span>
</code></pre></div><p>The diagonal elements of <code>C</code> estimate the total number of graduates with degrees in each field, while the off-diagonal elements estimate the number of graduates that chose each degree field pair.
For example, the elements of the leading submatrix</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">C[1</span><span class="o">:</span><span class="m">5</span><span class="p">,</span> <span class="m">1</span><span class="o">:</span><span class="m">5</span><span class="n">]</span>
</code></pre></div><pre><code>## 1100 1101 1102 1103 1104
## 1100 181555.0 128.0 0.0 163.5 0.0
## 1101 128.0 124979.0 647.5 971.0 196.5
## 1102 0.0 647.5 47352.5 521.5 0.0
## 1103 163.5 971.0 521.5 173097.0 261.5
## 1104 0.0 196.5 0.0 261.5 46670.0
</code></pre><p>provide estimates for the degree fields listed in the first five rows of <code>fields</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">head</span><span class="p">(</span><span class="n">fields</span><span class="p">,</span> <span class="m">5</span><span class="p">)</span>
</code></pre></div><pre><code>## # A tibble: 5 x 2
## field field_desc
## <dbl> <chr>
## 1 1100 General Agriculture
## 2 1101 Agriculture Production And Management
## 3 1102 Agricultural Economics
## 4 1103 Animal Sciences
## 5 1104 Food Science
</code></pre><p>About 125,000 graduates hold degrees in Agriculture Production And Management, nearly 1,000 of whom also hold degrees in Animal Sciences.
Agricultural Economics attracts about as many graduates as Food Science, but no respondents in the PUMS data reported studying both.</p>
<h2 id="similarity-measures">Similarity measures</h2>
<p>The diagonal elements of <code>C</code> estimate the “size,” in units of graduates, of each degree field.
The distribution of field sizes is positively skewed, with the largest field more than 30 times the size of the median field:
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">summary</span><span class="p">(</span><span class="nf">diag</span><span class="p">(</span><span class="n">C</span><span class="p">))</span>
</code></pre></div><pre><code>## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 10696 54731 142088 415163 407828 4275723
</code></pre><p>Using the elements of <code>C</code> to measure the strength of connections between fields may lead to biased inferences by, for example, making large fields with proportionally few graduates in common appear to have stronger connections than small fields with proportionally many graduates in common.
One way to avoid such bias is to normalise each element <code>\(c_{ij}\)</code> of <code>C</code> by the corresponding field sizes <code>\(s_i=c_{ii}\)</code> and <code>\(s_j=c_{jj}\)</code>, thereby producing a scale-invariant “similarity” measure between pairs of degree fields.</p>
<p>Dividing <code>\(c_{ij}\)</code> by the arithmetic mean <code>\((s_i+s_j)/2\)</code> yields the <a href="https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient">Dice coefficient</a>
<code>$$\mathrm{Dice}(i,j) = \frac{2c_{ij}}{s_i+s_j},$$</code>
while dividing <code>\(c_{ij}\)</code> by the geometric mean <code>\(\sqrt{s_is_j}\)</code> yields the <a href="https://en.wikipedia.org/wiki/Cosine_similarity#Otsuka-Ochiai_coefficient">Ochiai coefficient</a>
<code>$$\mathrm{Ochiai}(i,j) = \frac{c_{ij}}{\sqrt{s_i\,s_j}}.$$</code>
The Dice coefficient can be used to define the <a href="https://en.wikipedia.org/wiki/Jaccard_index">Jaccard index</a>
<code>$$\begin{align} \mathrm{Jaccard}(i,j) &= \frac{c_{ij}}{s_i + s_j - c_{ij}} \\ &= \frac{\mathrm{Dice}(i,j)}{2 - \mathrm{Dice}(i,j)}, \end{align}$$</code>
which is conceptually related to the <a href="https://en.wikipedia.org/wiki/Overlap_coefficient">overlap coefficient</a>
<code>$$\mathrm{Overlap}(i,j) = \frac{c_{ij}}{\min(s_i, s_j)}$$</code>
in that both capture the relative size of set intersections.
These four similarity measures take values on the closed unit interval <code>\([0,1]\)</code>, with more “similar” fields achieving values closer to unity.
Indeed, one can show that
<code>$$\mathrm{Jaccard}(i,j) \le \mathrm{Dice}(i,j) \le \mathrm{Ochiai}(i,j) \le \mathrm{Overlap}(i,j) \le 1,$$</code>
with the two inner inequalities holding with equality if and only if <code>\(s_i=s_j\)</code>, and with all four inequalities holding with equality if and only if <code>\(s_i=s_j=c_{ij}\)</code>. Thus, two fields have unit similarity precisely when the sets of graduates with degrees in each field coincide.</p>
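<p>As a concrete check, the counts shown earlier for Agriculture Production And Management (1101) and Animal Sciences (1103) respect this ordering. The sketch below recomputes each measure from those three numbers:</p>

```r
# Counts taken from the leading submatrix of C printed above
c_ij <- 971       # Graduates holding degrees in both fields
s_i  <- 124979    # Size of Agriculture Production And Management
s_j  <- 173097    # Size of Animal Sciences

jaccard <- c_ij / (s_i + s_j - c_ij)  # ~0.0033
dice    <- 2 * c_ij / (s_i + s_j)     # ~0.0065
ochiai  <- c_ij / sqrt(s_i * s_j)     # ~0.0066
overlap <- c_ij / min(s_i, s_j)       # ~0.0078

jaccard <= dice & dice <= ochiai & ochiai <= overlap  # TRUE
```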
<p>I compute matrices of Dice, Jaccard, Ochiai and overlap similarities by defining</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">S</span> <span class="o"><-</span> <span class="nf">matrix</span><span class="p">(</span><span class="nf">rep</span><span class="p">(</span><span class="nf">diag</span><span class="p">(</span><span class="n">C</span><span class="p">),</span> <span class="nf">nrow</span><span class="p">(</span><span class="n">C</span><span class="p">)),</span> <span class="n">nrow</span> <span class="o">=</span> <span class="nf">nrow</span><span class="p">(</span><span class="n">C</span><span class="p">))</span>
</code></pre></div><p>and exploiting element-wise matrix operations:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">dice_mat</span> <span class="o"><-</span> <span class="m">2</span> <span class="o">*</span> <span class="n">C</span> <span class="o">/</span> <span class="p">(</span><span class="n">S</span> <span class="o">+</span> <span class="nf">t</span><span class="p">(</span><span class="n">S</span><span class="p">))</span>
<span class="n">jaccard_mat</span> <span class="o"><-</span> <span class="n">C</span> <span class="o">/</span> <span class="p">(</span><span class="n">S</span> <span class="o">+</span> <span class="nf">t</span><span class="p">(</span><span class="n">S</span><span class="p">)</span> <span class="o">-</span> <span class="n">C</span><span class="p">)</span>
<span class="n">ochiai_mat</span> <span class="o"><-</span> <span class="n">C</span> <span class="o">/</span> <span class="nf">sqrt</span><span class="p">(</span><span class="n">S</span> <span class="o">*</span> <span class="nf">t</span><span class="p">(</span><span class="n">S</span><span class="p">))</span>
<span class="n">overlap_mat</span> <span class="o"><-</span> <span class="n">C</span> <span class="o">/</span> <span class="nf">pmin</span><span class="p">(</span><span class="n">S</span><span class="p">,</span> <span class="nf">t</span><span class="p">(</span><span class="n">S</span><span class="p">))</span>
</code></pre></div><h2 id="ordinal-properties">Ordinal properties</h2>
<p>One way to compare similarity measures is to compare how they rank fields from most to least similar.
I do so using <a href="https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient">Kendall’s tau coefficient</a>, which captures the extent to which two rankings agree on the relative positions of ranked entities.
Kendall’s tau is defined as
<code>$$\tau(r_1,r_2) = \frac{2\times\text{Number of concordant pairs}}{\text{Number of pairs}} - 1,$$</code>
where <code>\(r_1\)</code> and <code>\(r_2\)</code> are ranking functions, and where a pair <code>\((x,y)\)</code> of entities is “concordant” if <code>\((r_1(x)-r_1(y))\)</code> and <code>\((r_2(x)-r_2(y))\)</code> share the same sign.
If every pair is concordant then <code>\(\tau(r_1,r_2)=1\)</code>, and if none are concordant then <code>\(\tau(r_1,r_2)=-1\)</code>.
The more <code>\(r_1\)</code> and <code>\(r_2\)</code> agree on the relative positions of ranked entities, the greater is the number of concordant pairs and hence the larger is <code>\(\tau(r_1,r_2)\)</code>.</p>
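<p>A small example, assuming no ties: the two rankings below agree on five of the six pairs, so the formula gives <code>\(\tau = 2\times(5/6) - 1 = 2/3\)</code>, matching base R&rsquo;s <code>cor</code>:</p>

```r
r1 <- c(1, 2, 3, 4)
r2 <- c(1, 3, 2, 4)  # Swaps the middle two entities
cor(r1, r2, method = "kendall")  # 2/3: 5 of 6 pairs are concordant
```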
<p>Rearranging the definition of <code>\(\tau(r_1,r_2)\)</code> gives
<code>$$\Pr(\text{Pair is concordant}) = \frac{\tau(r_1, r_2) + 1}{2}.$$</code>
Thus, computing Kendall’s tau for the rankings produced by each similarity measure, and mapping the results linearly to the unit interval, allows me to estimate the rates of agreement between different measures.
I compute these rates as follows, excluding zero and unit similarities, and report the results as a matrix.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">similarities</span> <span class="o"><-</span> <span class="nf">tibble</span><span class="p">(</span>
<span class="n">Dice</span> <span class="o">=</span> <span class="nf">as.vector</span><span class="p">(</span><span class="n">dice_mat</span><span class="p">),</span>
<span class="n">Jaccard</span> <span class="o">=</span> <span class="nf">as.vector</span><span class="p">(</span><span class="n">jaccard_mat</span><span class="p">),</span>
<span class="n">Ochiai</span> <span class="o">=</span> <span class="nf">as.vector</span><span class="p">(</span><span class="n">ochiai_mat</span><span class="p">),</span>
<span class="n">Overlap</span> <span class="o">=</span> <span class="nf">as.vector</span><span class="p">(</span><span class="n">overlap_mat</span><span class="p">),</span>
<span class="n">`Co-occ.`</span> <span class="o">=</span> <span class="nf">as.vector</span><span class="p">(</span><span class="n">C</span><span class="p">)</span> <span class="c1"># Include for comparison</span>
<span class="p">)</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="nf">as.vector</span><span class="p">(</span><span class="nf">upper.tri</span><span class="p">(</span><span class="n">C</span><span class="p">)</span> <span class="o">&</span> <span class="n">C</span> <span class="o">></span> <span class="m">0</span><span class="p">))</span>
<span class="n">similarities</span> <span class="o">%>%</span>
<span class="nf">cor</span><span class="p">(</span><span class="n">method</span> <span class="o">=</span> <span class="s">'kendall'</span><span class="p">)</span> <span class="o">%>%</span>
<span class="p">{(</span><span class="n">. </span><span class="o">+</span> <span class="m">1</span><span class="p">)</span> <span class="o">/</span> <span class="m">2</span><span class="p">}</span> <span class="o">%>%</span> <span class="c1"># Map to unit interval</span>
<span class="nf">round</span><span class="p">(</span><span class="m">3</span><span class="p">)</span>
</code></pre></div><pre><code>## Dice Jaccard Ochiai Overlap Co-occ.
## Dice 1.000 1.000 0.914 0.778 0.778
## Jaccard 1.000 1.000 0.914 0.778 0.778
## Ochiai 0.914 0.914 1.000 0.864 0.798
## Overlap 0.778 0.778 0.864 1.000 0.765
## Co-occ. 0.778 0.778 0.798 0.765 1.000
</code></pre><p>The Dice and Jaccard measures produce identical rankings, which reach about 91% and 78% agreement with the rankings produced using the Ochiai and overlap measures, respectively.
All four measures produce rankings that reach less than 80% agreement with the ranking produced using co-occurrence counts.</p>
<p>The following table presents the 10 most similar field pairs using the Dice and Jaccard measures, and those pairs’ ranks using the Ochiai, overlap and co-occurrence measures.</p>
<table>
<thead>
<tr>
<th align="left">Field 1</th>
<th align="left">Field 2</th>
<th align="right">Dice/Jacc. rank</th>
<th align="right">Ochiai rank</th>
<th align="right">Overlap rank</th>
<th align="right">Co-occ. rank</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Plant Science And Agronomy</td>
<td align="left">Soil Science</td>
<td align="right">1</td>
<td align="right">1</td>
<td align="right">1</td>
<td align="right">127</td>
</tr>
<tr>
<td align="left">Mathematics Teacher Education</td>
<td align="left">Science And Computer Teacher Education</td>
<td align="right">2</td>
<td align="right">3</td>
<td align="right">15</td>
<td align="right">66</td>
</tr>
<tr>
<td align="left">Biochemical Sciences</td>
<td align="left">Molecular Biology</td>
<td align="right">3</td>
<td align="right">2</td>
<td align="right">5</td>
<td align="right">56</td>
</tr>
<tr>
<td align="left">Ecology</td>
<td align="left">Miscellaneous Biology</td>
<td align="right">4</td>
<td align="right">4</td>
<td align="right">21</td>
<td align="right">146</td>
</tr>
<tr>
<td align="left">Mathematics</td>
<td align="left">Physics</td>
<td align="right">5</td>
<td align="right">5</td>
<td align="right">8</td>
<td align="right">11</td>
</tr>
<tr>
<td align="left">Political Science And Government</td>
<td align="left">History</td>
<td align="right">6</td>
<td align="right">8</td>
<td align="right">48</td>
<td align="right">2</td>
</tr>
<tr>
<td align="left">Journalism</td>
<td align="left">Mass Media</td>
<td align="right">7</td>
<td align="right">9</td>
<td align="right">30</td>
<td align="right">26</td>
</tr>
<tr>
<td align="left">Social Science Or History Teacher Education</td>
<td align="left">Language And Drama Education</td>
<td align="right">8</td>
<td align="right">10</td>
<td align="right">43</td>
<td align="right">53</td>
</tr>
<tr>
<td align="left">Accounting</td>
<td align="left">Finance</td>
<td align="right">9</td>
<td align="right">12</td>
<td align="right">32</td>
<td align="right">1</td>
</tr>
<tr>
<td align="left">Soil Science</td>
<td align="left">Geosciences</td>
<td align="right">10</td>
<td align="right">14</td>
<td align="right">53</td>
<td align="right">1048</td>
</tr>
</tbody>
</table>
<p>Plant Science And Agronomy and Soil Science top the rankings for all four similarity measures, despite being only the 127th most common field pair.
Biochemical Sciences and Molecular Biology, and Mathematics and Physics are the only other field pairs that rank in the top 10 most similar across all four measures.
Accounting and Finance, the most common field pair, ranks in the top 10 most similar fields using the Dice and Jaccard measures only.</p>
<h2 id="network-properties">Network properties</h2>
<p>Another way to compare similarity measures is to compare properties of the networks they define.
Each similarity matrix defines a network in which nodes represent degree fields and in which edges have weight equal to the similarity between incident nodes.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">igraph</span><span class="p">)</span>
<span class="n">get_network</span> <span class="o"><-</span> <span class="nf">function</span><span class="p">(</span><span class="n">adj_mat</span><span class="p">)</span> <span class="p">{</span>
<span class="n">adj_mat</span> <span class="o">%>%</span>
<span class="nf">graph.adjacency</span><span class="p">(</span><span class="n">mode</span> <span class="o">=</span> <span class="s">'undirected'</span><span class="p">,</span> <span class="n">weighted</span> <span class="o">=</span> <span class="kc">TRUE</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">simplify</span><span class="p">()</span> <span class="c1"># Ignore self-similarities</span>
<span class="p">}</span>
<span class="n">coocc_net</span> <span class="o"><-</span> <span class="nf">get_network</span><span class="p">(</span><span class="n">C</span><span class="p">)</span>
<span class="n">dice_net</span> <span class="o"><-</span> <span class="nf">get_network</span><span class="p">(</span><span class="n">dice_mat</span><span class="p">)</span>
<span class="n">jaccard_net</span> <span class="o"><-</span> <span class="nf">get_network</span><span class="p">(</span><span class="n">jaccard_mat</span><span class="p">)</span>
<span class="n">ochiai_net</span> <span class="o"><-</span> <span class="nf">get_network</span><span class="p">(</span><span class="n">ochiai_mat</span><span class="p">)</span>
<span class="n">overlap_net</span> <span class="o"><-</span> <span class="nf">get_network</span><span class="p">(</span><span class="n">overlap_mat</span><span class="p">)</span>
</code></pre></div><p>I compare similarity measures by comparing fields’ <a href="https://en.wikipedia.org/wiki/Centrality">centralities</a> in each network.
I base my analysis on <a href="https://en.wikipedia.org/wiki/PageRank">PageRank</a> centrality for a variety of reasons:</p>
<ul>
<li>Unlike degree-based centrality measures (e.g., degree and strength), PageRank considers the “importance” of each neighbour as well as neighbourhood size;</li>
<li>Unlike distance-based centrality measures (e.g., betweenness and closeness), PageRank doesn’t require solving a bunch of <a href="https://en.wikipedia.org/wiki/Shortest_path_problem">shortest path problems</a>;</li>
<li>Unlike eigenvector centrality, PageRank doesn’t require the underlying network to be strongly connected.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></li>
</ul>
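<p>For intuition, PageRank can be sketched as a power iteration on the row-normalised weight matrix. The function below is a simplified illustration (assuming every node has positive strength), not igraph&rsquo;s implementation, which also handles dangling nodes and other edge cases:</p>

```r
# Simplified PageRank via power iteration; d is the damping factor
pagerank_sketch <- function(A, d = 0.85, tol = 1e-10) {
  n <- nrow(A)
  P <- A / rowSums(A)  # Transition probabilities proportional to edge weights
  r <- rep(1 / n, n)   # Start from the uniform distribution
  repeat {
    r_new <- (1 - d) / n + d * as.vector(t(P) %*% r)
    if (max(abs(r_new - r)) < tol) return(r_new)
    r <- r_new
  }
}
```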
<p>I store degree fields’ PageRank centralities as a tibble</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">pageranks</span> <span class="o"><-</span> <span class="nf">tibble</span><span class="p">(</span>
<span class="n">Dice</span> <span class="o">=</span> <span class="nf">page_rank</span><span class="p">(</span><span class="n">dice_net</span><span class="p">)</span><span class="o">$</span><span class="n">vector</span><span class="p">,</span>
<span class="n">Jaccard</span> <span class="o">=</span> <span class="nf">page_rank</span><span class="p">(</span><span class="n">jaccard_net</span><span class="p">)</span><span class="o">$</span><span class="n">vector</span><span class="p">,</span>
<span class="n">Ochiai</span> <span class="o">=</span> <span class="nf">page_rank</span><span class="p">(</span><span class="n">ochiai_net</span><span class="p">)</span><span class="o">$</span><span class="n">vector</span><span class="p">,</span>
<span class="n">Overlap</span> <span class="o">=</span> <span class="nf">page_rank</span><span class="p">(</span><span class="n">overlap_net</span><span class="p">)</span><span class="o">$</span><span class="n">vector</span><span class="p">,</span>
<span class="n">`Co-occ.`</span> <span class="o">=</span> <span class="nf">page_rank</span><span class="p">(</span><span class="n">coocc_net</span><span class="p">)</span><span class="o">$</span><span class="n">vector</span>
<span class="p">)</span>
</code></pre></div><p>and compute the corresponding matrix of Kendall’s tau coefficients, each mapped linearly to the unit interval:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">pageranks</span> <span class="o">%>%</span>
<span class="nf">cor</span><span class="p">(</span><span class="n">method</span> <span class="o">=</span> <span class="s">'kendall'</span><span class="p">)</span> <span class="o">%>%</span>
<span class="p">{(</span><span class="n">. </span><span class="o">+</span> <span class="m">1</span><span class="p">)</span> <span class="o">/</span> <span class="m">2</span><span class="p">}</span> <span class="o">%>%</span>
<span class="nf">round</span><span class="p">(</span><span class="m">3</span><span class="p">)</span>
</code></pre></div><pre><code>## Dice Jaccard Ochiai Overlap Co-occ.
## Dice 1.000 0.999 0.949 0.819 0.824
## Jaccard 0.999 1.000 0.949 0.819 0.823
## Ochiai 0.949 0.949 1.000 0.869 0.839
## Overlap 0.819 0.819 0.869 1.000 0.791
## Co-occ. 0.824 0.823 0.839 0.791 1.000
</code></pre><p>The rankings of fields from most to least PageRank-central under the Dice and Jaccard measures are almost identical, and reach just over 82% agreement with the ranking produced using co-occurrence counts.</p>
<p>The table below presents the 10 most PageRank-central fields using the Dice measure, and the corresponding ranks using the Jaccard, Ochiai, overlap and co-occurrence measures.
The column “Size rank” orders each field from largest to smallest.</p>
<table>
<thead>
<tr>
<th align="left">Field</th>
<th align="right">Dice rank</th>
<th align="right">Jaccard rank</th>
<th align="right">Ochiai rank</th>
<th align="right">Overlap rank</th>
<th align="right">Co-occ. rank</th>
<th align="right">Size rank</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">French German Latin And Other Common Foreign Language Studies</td>
<td align="right">1</td>
<td align="right">1</td>
<td align="right">1</td>
<td align="right">9</td>
<td align="right">15</td>
<td align="right">35</td>
</tr>
<tr>
<td align="left">Mathematics</td>
<td align="right">2</td>
<td align="right">2</td>
<td align="right">2</td>
<td align="right">6</td>
<td align="right">10</td>
<td align="right">22</td>
</tr>
<tr>
<td align="left">Political Science And Government</td>
<td align="right">3</td>
<td align="right">3</td>
<td align="right">3</td>
<td align="right">5</td>
<td align="right">5</td>
<td align="right">10</td>
</tr>
<tr>
<td align="left">Mass Media</td>
<td align="right">4</td>
<td align="right">5</td>
<td align="right">11</td>
<td align="right">23</td>
<td align="right">28</td>
<td align="right">50</td>
</tr>
<tr>
<td align="left">Molecular Biology</td>
<td align="right">5</td>
<td align="right">4</td>
<td align="right">13</td>
<td align="right">26</td>
<td align="right">53</td>
<td align="right">113</td>
</tr>
<tr>
<td align="left">English Language And Literature</td>
<td align="right">6</td>
<td align="right">6</td>
<td align="right">4</td>
<td align="right">4</td>
<td align="right">3</td>
<td align="right">9</td>
</tr>
<tr>
<td align="left">History</td>
<td align="right">7</td>
<td align="right">7</td>
<td align="right">9</td>
<td align="right">10</td>
<td align="right">9</td>
<td align="right">15</td>
</tr>
<tr>
<td align="left">Economics</td>
<td align="right">8</td>
<td align="right">8</td>
<td align="right">7</td>
<td align="right">7</td>
<td align="right">8</td>
<td align="right">14</td>
</tr>
<tr>
<td align="left">Psychology</td>
<td align="right">9</td>
<td align="right">9</td>
<td align="right">5</td>
<td align="right">3</td>
<td align="right">1</td>
<td align="right">3</td>
</tr>
<tr>
<td align="left">Sociology</td>
<td align="right">10</td>
<td align="right">10</td>
<td align="right">10</td>
<td align="right">13</td>
<td align="right">12</td>
<td align="right">19</td>
</tr>
</tbody>
</table>
<p>Languages, Mathematics, and Political Science And Government are the most PageRank-central fields under the Dice, Jaccard and Ochiai measures.
The Ochiai and overlap measures rank Mass Media and Molecular Biology relatively low on PageRank centrality, possibly due to those fields’ relatively small size.
The PageRank centralities produced using co-occurrence counts appear to correlate positively with field size, consistent with my worry that such counts may bias the measurement of intellectual connectedness in favour of larger fields.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Ryan Tibshirani <a href="http://www.stat.cmu.edu/~ryantibs/datamining/lectures/03-pr.pdf">provides excellent notes</a> on how PageRank handles disconnected components and “dangling” nodes. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
College degrees in the US: Demographics
https://bldavies.com/blog/college-degrees-demographics/
Mon, 01 Jul 2019 00:00:00 +0000https://bldavies.com/blog/college-degrees-demographics/<p>Each year, the US Census Bureau <a href="https://www.census.gov/programs-surveys/acs/data/pums.html">publishes</a> a set of Public Use Microdata Sample (PUMS) files containing responses to the American Community Survey (ACS).
In this post, I use the 2016 ACS PUMS data to explore the variation in educational attainment and degree field choices between demographic groups.
The source data are available on <a href="https://github.com/bldavies/college-degrees/">GitHub</a>.</p>
<h2 id="educational-attainment">Educational attainment</h2>
<p>The table below reports educational attainment rates for each sex, pooled across all ages and degree fields.
Overall, a randomly selected female is more likely to have a college degree than a randomly selected male.
However, fewer females pursue doctoral degrees than males: male graduates are about 1.5 times as likely as female graduates to have a doctorate.</p>
<table>
<thead>
<tr>
<th align="left">Degree level</th>
<th align="right">% of females</th>
<th align="right">% of males</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">No college degree</td>
<td align="right">76.95</td>
<td align="right">78.62</td>
</tr>
<tr>
<td align="left">Bachelor’s degree</td>
<td align="right">14.66</td>
<td align="right">13.46</td>
</tr>
<tr>
<td align="left">Professional or Master’s degree</td>
<td align="right">7.64</td>
<td align="right">6.82</td>
</tr>
<tr>
<td align="left">Doctoral degree</td>
<td align="right">0.75</td>
<td align="right">1.10</td>
</tr>
</tbody>
</table>
<p>Pooling across all ages masks variation in educational attainment rates between age groups.
I present this variation in the line chart below, which compares educational attainment by age and sex.
The chart presents mean age group shares over a rolling five-year window, muting some of the noise in attainment rates caused by random fluctuations between consecutive years of age.</p>
<p><img src="figures/attainment-line-1.svg" alt=""></p>
<p>Young females have higher educational attainment rates than young males, but the decline in such rates with age is steeper among females than males.
Both sexes experience a spike in attainment between the ages of 60 and 70, corresponding to graduation dates during the late 1960s and early 1970s.
This spike could be due to the <a href="https://en.wikipedia.org/wiki/Higher_Education_Act_of_1965">Higher Education Act of 1965</a>, which “strengthen[ed] the educational resources of [US] colleges and universities” and “provide[d] financial assistance for students in post-secondary and higher education.”
The spike is most apparent among males.</p>
<p>Differences in educational attainment could reflect differences in degree field choices.
For example, to the extent that (i) there are more male science graduates than female science graduates, and (ii) science graduates tend to pursue doctoral degrees more often than non-science graduates, we would expect to see more doctorates among males than females.
If field selection is the only source of differences in educational attainment then there should be no difference in the within-field shares of male and female graduates with post-graduate degrees.
I compare such shares in the scatterplots below, in which points correspond to degree fields and have radii proportional to the number of graduates in each field.</p>
<p><img src="figures/attainment-scatter-1.svg" alt=""></p>
<p>The gaps between the OLS fitted lines and the 45-degree reference lines imply that, on average, male graduates are more likely than female graduates in the same field to hold post-graduate degrees.
This discrepancy appears to be larger for doctorates than for other post-graduate degrees.</p>
<h2 id="degree-fields">Degree fields</h2>
<p>The bar chart below plots the eight most common degree fields among male and female graduates.
Both business and accounting rank among the most common fields for graduates of each sex.
Nursing and education are more common among females, while computer science and engineering are more common among males.</p>
<p><img src="figures/fields-bar-1.svg" alt=""></p>
<p>The frequency at which people graduate with degrees in different fields may vary over time due to changes in social preferences or labour market conditions.
The line chart below plots the shares of graduates who studied electrical engineering or psychology, stratified by age and sex.
The chart presents mean age group shares over a rolling five-year window.</p>
<p><img src="figures/fields-line-1.svg" alt=""></p>
<p>The trough in male electrical engineering graduates and spike in psychology graduates between the ages of 60 and 70 both coincide with the spike in educational attainment following the Higher Education Act of 1965.
The Act may have encouraged males to substitute from electrical engineering (or from not studying) to psychology by changing the relative benefits and costs of becoming qualified in each field.
For example, increasing access to federal loans may have encouraged students to pursue degrees with less certain job prospects by delaying the private burden of paying tuition.</p>
<p>The PUMS data report up to two degree fields for each respondent, allowing me to estimate the frequency of field pairings within the US population.
For example, the bar chart below shows the fields most frequently paired with economics and mathematics among graduates of each sex.
Male economics graduates appear to make pairing choices similar to those of female economics graduates.
Males pair mathematics with physics about as often as with computer science, while females do so only about half as often.</p>
<p><img src="figures/pairs-1.svg" alt=""></p>
<p>Field pair frequencies provide insight into the intellectual connections between fields.
Such connections may reflect fields using similar techniques (e.g., economics and finance) or providing complementary skills (e.g., mathematics and computer science).
I explore those connections <a href="https://bldavies.com/blog/college-degrees-similarity-measures/">here</a> and <a href="https://bldavies.com/blog/college-degrees-community-detection/">here</a>.</p>
Reading the ministerial diaries
https://bldavies.com/blog/reading-ministerial-diaries/
Wed, 12 Jun 2019 00:00:00 +0000https://bldavies.com/blog/reading-ministerial-diaries/<p>In December 2018, the New Zealand Government <a href="https://www.beehive.govt.nz/release/government-proactively-release-ministerial-diaries">announced</a> that its ministers “will for the first time release details of their internal and external meetings.”
The Government has since published these “ministerial diaries” as <a href="https://www.beehive.govt.nz/search?f%5B0%5D=content_type_facet%3Aministerial_diary&f%5B1%5D=government_facet%3A6203&f%5B2%5D=ministers%3A6205">a series of PDFs</a>.
In this post, I analyse the ministerial diary of <a href="https://www.beehive.govt.nz/minister/hon-david-parker">David Parker</a>, a <a href="https://www.odt.co.nz/news/election-2017/parker-emerges-pivotal-cabinet-minister">“pivotal cabinet minister”</a> who wears a range of politically and economically significant hats:</p>
<ul>
<li>Attorney-General;</li>
<li>Minister of Economic Development;</li>
<li>Minister for the Environment;</li>
<li>Minister of Trade and Export Growth;</li>
<li>Associate Minister of Finance.</li>
</ul>
<p>These roles, coupled with his scheduled activities for the 2018 calendar year being available in <a href="https://www.beehive.govt.nz/sites/default/files/2019-05/October%202017%20-%20December%202018_0.pdf">a single, consistently formatted table</a>, make Minister Parker’s diary (hereafter “the diary”) an interesting and relatively painless document to analyse.</p>
<h2 id="parsing-the-data">Parsing the data</h2>
<p>I read the diary into R using the <code>pdf_data</code> function from <a href="https://cran.r-project.org/package=pdftools"><code>pdftools</code></a>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">pdftools</span><span class="p">)</span>
<span class="n">path</span> <span class="o"><-</span> <span class="s">"https://www.beehive.govt.nz/sites/default/files/2019-05/October%202017%20-%20December%202018_0.pdf"</span>
<span class="n">pages</span> <span class="o"><-</span> <span class="nf">pdf_data</span><span class="p">(</span><span class="n">path</span><span class="p">)</span>
</code></pre></div><p><code>pdf_data</code> scans each page for distinct words, encloses these words in <a href="https://en.wikipedia.org/wiki/Minimum_bounding_box">bounding boxes</a>, and stores the coordinates and content of each box as a list of tibbles.
For example, the diary’s first page contains the following data:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">dplyr</span><span class="p">)</span>
<span class="n">pages[[1]]</span>
</code></pre></div><pre><code>## # A tibble: 336 x 6
## width height x y space text
## <int> <int> <int> <int> <lgl> <chr>
## 1 46 20 72 75 TRUE David
## 2 52 20 122 75 TRUE Parker
## 3 42 20 179 75 TRUE Diary
## 4 77 20 226 75 FALSE Summary
## 5 11 11 72 102 TRUE 26
## 6 36 11 85 102 TRUE October
## 7 22 11 124 102 TRUE 2017
## 8 3 11 149 102 TRUE -
## 9 11 11 155 102 TRUE 31
## 10 46 11 168 102 TRUE December
## # … with 326 more rows
</code></pre><p>The <code>x</code> and <code>y</code> columns provide the horizontal and vertical displacement, in pixels, of each bounding box from the top-left corner of the page.
The left-most boxes sit 72 pixels from the left page boundary, allowing me to identify table rows by the cumulative number of boxes for which <code>x</code> equals 72.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">pages[[1]]</span> <span class="o">%>%</span>
<span class="nf">arrange</span><span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="n">x</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">row</span> <span class="o">=</span> <span class="nf">cumsum</span><span class="p">(</span><span class="n">x</span> <span class="o">==</span> <span class="m">72</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="nf">cumsum</span><span class="p">(</span><span class="n">x</span> <span class="o">==</span> <span class="m">72</span> <span class="o">&</span> <span class="n">text</span> <span class="o">==</span> <span class="s">"Date"</span><span class="p">)</span> <span class="o">></span> <span class="m">0</span><span class="p">)</span> <span class="c1"># Remove preamble</span>
</code></pre></div><pre><code>## # A tibble: 91 x 7
## width height x y space text row
## <int> <int> <int> <int> <lgl> <chr> <int>
## 1 21 11 72 355 FALSE Date 14
## 2 46 11 149 355 TRUE Scheduled 14
## 3 22 11 198 355 FALSE Time 14
## 4 37 11 235 355 FALSE Meeting 14
## 5 38 11 390 355 FALSE Location 14
## 6 21 11 504 355 FALSE With 14
## 7 39 11 630 355 FALSE Portfolio 14
## 8 53 11 72 382 FALSE 26/10/2017 15
## 9 25 11 149 382 TRUE 11:00 15
## 10 3 11 177 382 TRUE - 15
## # … with 81 more rows
</code></pre><p>The <code>x</code> values for which <code>row</code> equals 14 provide the left alignment points for the text in each of the diary’s six columns.
These points remain unchanged across all 84 pages, allowing me to identify rows and columns throughout the diary within a single pipe:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">tidyr</span><span class="p">)</span>
<span class="c1"># Define column names and left alignment points</span>
<span class="n">columns</span> <span class="o"><-</span> <span class="nf">tibble</span><span class="p">(</span>
<span class="n">left_x</span> <span class="o">=</span> <span class="nf">c</span><span class="p">(</span><span class="m">72</span><span class="p">,</span> <span class="m">149</span><span class="p">,</span> <span class="m">235</span><span class="p">,</span> <span class="m">390</span><span class="p">,</span> <span class="m">504</span><span class="p">,</span> <span class="m">630</span><span class="p">),</span>
<span class="n">name</span> <span class="o">=</span> <span class="nf">c</span><span class="p">(</span><span class="s">"date"</span><span class="p">,</span> <span class="s">"scheduled_time"</span><span class="p">,</span> <span class="s">"meeting"</span><span class="p">,</span> <span class="s">"location"</span><span class="p">,</span> <span class="s">"with"</span><span class="p">,</span> <span class="s">"portfolio"</span><span class="p">)</span>
<span class="p">)</span>
<span class="c1"># Identify page numbers</span>
<span class="nf">for </span><span class="p">(</span><span class="n">i</span> <span class="n">in</span> <span class="m">1</span> <span class="o">:</span> <span class="nf">length</span><span class="p">(</span><span class="n">pages</span><span class="p">))</span> <span class="n">pages[[i]]</span><span class="o">$</span><span class="n">page</span> <span class="o"><-</span> <span class="n">i</span>
<span class="c1"># Process data</span>
<span class="n">diary</span> <span class="o"><-</span> <span class="nf">bind_rows</span><span class="p">(</span><span class="n">pages</span><span class="p">)</span> <span class="o">%>%</span>
<span class="c1"># Identify table rows</span>
<span class="nf">arrange</span><span class="p">(</span><span class="n">page</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">x</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">row</span> <span class="o">=</span> <span class="nf">cumsum</span><span class="p">(</span><span class="n">x</span> <span class="o">==</span> <span class="n">columns</span><span class="o">$</span><span class="n">left_x[1]</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="nf">cumsum</span><span class="p">(</span><span class="n">x</span> <span class="o">==</span> <span class="n">columns</span><span class="o">$</span><span class="n">left_x[1]</span> <span class="o">&</span> <span class="n">text</span> <span class="o">==</span> <span class="s">"Date"</span><span class="p">)</span> <span class="o">==</span> <span class="m">1</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">filter</span><span class="p">(</span><span class="n">row</span> <span class="o">></span> <span class="nf">min</span><span class="p">(</span><span class="n">row</span><span class="p">))</span> <span class="o">%>%</span> <span class="c1"># Remove header row</span>
<span class="c1"># Identify table columns</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">column</span> <span class="o">=</span> <span class="nf">sapply</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="nf">function</span><span class="p">(</span><span class="n">x</span><span class="p">){</span><span class="nf">max</span><span class="p">(</span><span class="nf">which</span><span class="p">(</span><span class="n">columns</span><span class="o">$</span><span class="n">left_x</span> <span class="o"><=</span> <span class="n">x</span><span class="p">))}),</span>
<span class="n">column</span> <span class="o">=</span> <span class="n">columns</span><span class="o">$</span><span class="n">name[column]</span><span class="p">)</span> <span class="o">%>%</span>
<span class="c1"># Concatenate text within table cells</span>
<span class="nf">group_by</span><span class="p">(</span><span class="n">row</span><span class="p">,</span> <span class="n">column</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">summarise</span><span class="p">(</span><span class="n">text</span> <span class="o">=</span> <span class="nf">paste</span><span class="p">(</span><span class="n">text</span><span class="p">,</span> <span class="n">collapse</span> <span class="o">=</span> <span class="s">" "</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">ungroup</span><span class="p">()</span> <span class="o">%>%</span>
<span class="c1"># Clean data</span>
<span class="nf">clean_data</span><span class="p">()</span> <span class="o">%>%</span>
<span class="c1"># Convert to wide format</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">column</span> <span class="o">=</span> <span class="nf">factor</span><span class="p">(</span><span class="n">column</span><span class="p">,</span> <span class="n">levels</span> <span class="o">=</span> <span class="n">columns</span><span class="o">$</span><span class="n">name</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">spread</span><span class="p">(</span><span class="n">column</span><span class="p">,</span> <span class="n">text</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">select</span><span class="p">(</span><span class="o">-</span><span class="n">row</span><span class="p">)</span>
</code></pre></div><p>I define the <code>clean_data</code> function in <a href="#appendix">the appendix</a> below.</p>
<p>The resulting tibble <code>diary</code> contains 1,553 rows, each of which describes a unique entry scheduled between October 2017 and December 2018.
I select entries scheduled during the 2018 calendar year:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="p">(</span><span class="n">data</span> <span class="o"><-</span> <span class="nf">filter</span><span class="p">(</span><span class="n">diary</span><span class="p">,</span> <span class="nf">grepl</span><span class="p">(</span><span class="s">"2018"</span><span class="p">,</span> <span class="n">date</span><span class="p">)))</span>
</code></pre></div><pre><code>## # A tibble: 1,347 x 6
## date scheduled_time meeting location with portfolio
## <chr> <chr> <chr> <chr> <chr> <chr>
## 1 15/01/… 10:00 - 11:00 Meeting with Fi… Beehive Treasury of… Associate F…
## 2 15/01/… 14:00 - 14:30 Meeting with MF… Beehive MFAT offici… Trade and E…
## 3 15/01/… 15:00 - 15:30 Meeting with MB… Beehive MBIE offici… Economic De…
## 4 16/01/… 09:30 - 10:15 Meeting with En… Selwyn Environment… Environment
## 5 16/01/… 10:40 - 11:40 Meeting with Ng… Springston Ngai Tahu r… Environment
## 6 16/01/… 12:00 - 12:30 Meeting with fa… Canterbury Farm owners… Environment
## 7 16/01/… 12:40 - 13:40 Working Lunch w… Canterbury Te Waihora … Environment
## 8 16/01/… 13:50 - 14:45 Meeting with fa… Leeston Farm owners… Environment
## 9 16/01/… 16:30 - 17:30 Meeting with Sy… Middleton,… Syft Techon… Economic De…
## 10 17/01/… 09:30 - 10:00 Meeting with Ca… Beehive Cabinet Off… All
## # … with 1,337 more rows
</code></pre><p>According to <a href="https://www.beehive.govt.nz/ministerial-diaries-full-disclaimer">the official disclaimer</a>, the diary excludes personal and party political meetings, along with details published elsewhere such as time spent in the House of Representatives.
Moreover, some details are withheld under various sections of <a href="http://legislation.govt.nz/act/public/1982/0156/latest/DLM64785.html">the Official Information Act</a>.
I assume that the remaining entries provide a representative sample of Minister Parker’s ministerial activities.</p>
<h2 id="analysing-word-frequencies">Analysing word frequencies</h2>
<p>I analyse the frequency of words used in the <code>with</code> column of <code>data</code>.
These frequencies provide insight into Minister Parker’s interactions with different organisations.
I use the <code>unnest_tokens</code> function from <a href="https://cran.r-project.org/package=tidytext"><code>tidytext</code></a> to identify unique words and the <code>count</code> function from <code>dplyr</code> to count word frequencies.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">tidytext</span><span class="p">)</span>
<span class="n">data</span> <span class="o">%>%</span>
<span class="nf">unnest_tokens</span><span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="n">with</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">anti_join</span><span class="p">(</span><span class="nf">get_stopwords</span><span class="p">())</span> <span class="o">%>%</span> <span class="c1"># Remove stop words</span>
<span class="nf">count</span><span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="n">sort</span> <span class="o">=</span> <span class="kc">TRUE</span><span class="p">)</span>
</code></pre></div><pre><code>## # A tibble: 674 x 2
## word n
## <chr> <int>
## 1 attending 290
## 2 officials 272
## 3 minister 198
## 4 ministers 108
## 5 mfe 89
## 6 mbie 82
## 7 jones 76
## 8 sage 58
## 9 twyford 56
## 10 mfat 53
## # … with 664 more rows
</code></pre><p>The most frequent word, “attending,” reflects cabinet meetings, media briefings and other general ministerial duties.
The next most frequent word, “officials,” reflects Minister Parker’s meetings with the Ministry for the Environment (MfE), the Ministry of Business, Innovation and Employment (MBIE), and the Ministry of Foreign Affairs and Trade (MFAT), along with other government departments.
Both “minister” and “ministers” reflect meetings with Ministers <a href="https://www.beehive.govt.nz/minister/hon-shane-jones">Jones</a>, <a href="https://www.beehive.govt.nz/minister/hon-eugenie-sage">Sage</a>, <a href="https://www.beehive.govt.nz/minister/hon-phil-twyford">Twyford</a> and others.</p>
<h3 id="computing-tf-idf-scores">Computing tf-idf scores</h3>
<p>Counting word frequencies across all portfolios masks portfolio-specific interactions.
I infer such interactions from the <a href="https://www.tidytextmining.com/tfidf.html"><em>term frequency-inverse document frequency</em></a> (tf-idf) scores of word-portfolio pairs.
I identify these pairs as follows.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">word_portfolio_pairs</span> <span class="o"><-</span> <span class="n">data</span> <span class="o">%>%</span>
<span class="c1"># Disambiguate portfolio names</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">portfolio</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"Att.*?ral|AG"</span><span class="p">,</span> <span class="s">"Attorney-General"</span><span class="p">,</span> <span class="n">portfolio</span><span class="p">))</span> <span class="o">%>%</span>
<span class="c1"># Split entries with multiple portfolios</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">portfolio</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"[^[:alpha:] -]"</span><span class="p">,</span> <span class="s">"&"</span><span class="p">,</span> <span class="n">portfolio</span><span class="p">),</span>
<span class="n">portfolio</span> <span class="o">=</span> <span class="nf">strsplit</span><span class="p">(</span><span class="n">portfolio</span><span class="p">,</span> <span class="s">"&"</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">unnest</span><span class="p">()</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">portfolio</span> <span class="o">=</span> <span class="nf">trimws</span><span class="p">(</span><span class="n">portfolio</span><span class="p">))</span> <span class="o">%>%</span>
<span class="c1"># Identify word-portfolio pairs</span>
<span class="nf">filter</span><span class="p">(</span><span class="o">!</span><span class="nf">is.na</span><span class="p">(</span><span class="n">portfolio</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">unnest_tokens</span><span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="n">with</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">select</span><span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="n">portfolio</span><span class="p">)</span>
</code></pre></div><p>tf-idf scores measure the “importance” of words in each document in a corpus.
The <em>term frequency</em></p>
<p><code>$$\mathrm{tf}(w, d)=\frac{\text{Number of occurrences of word}\ w\ \text{in document}\ d}{\text{Number of words in document}\ d}$$</code></p>
<p>measures the rate at which word <code>\(w\)</code> occurs in a document <code>\(d\)</code>, while the <em>inverse document frequency</em></p>
<p><code>$$\mathrm{idf}(w) = -\ln\left(\frac{\text{Number of documents containing word}\ w}{\text{Number of documents}}\right)$$</code></p>
<p>provides a normalisation factor that penalises ubiquitous words.
The tf-idf score</p>
<p><code>$$\text{tf-idf}(w,d) = \mathrm{tf}(w, d) \cdot \mathrm{idf}(w)$$</code></p>
<p>thus measures the prevalence of word <code>\(w\)</code> in document <code>\(d\)</code>, normalised by that word’s prevalence in other documents.
I interpret the set of entries associated with each portfolio as a document and use the <code>bind_tf_idf</code> function from <code>tidytext</code> to compute word-portfolio tf-idf scores:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">word_portfolio_pairs</span> <span class="o">%>%</span>
<span class="nf">count</span><span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="n">portfolio</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">bind_tf_idf</span><span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="n">portfolio</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
</code></pre></div><pre><code>## # A tibble: 1,066 x 6
## word portfolio n tf idf tf_idf
## <chr> <chr> <int> <dbl> <dbl> <dbl>
## 1 a Associate Finance 1 0.00285 0.693 0.00197
## 2 a Environment 1 0.000739 0.693 0.000512
## 3 a Trade and Export Growth 2 0.00277 0.693 0.00192
## 4 accelerator Economic Development 1 0.00137 1.79 0.00245
## 5 acting Attorney-General 1 0.00215 1.79 0.00385
## 6 action Trade and Export Growth 1 0.00139 1.79 0.00248
## 7 adrian Economic Development 1 0.00137 1.79 0.00245
## 8 advisory Economic Development 2 0.00273 0.693 0.00189
## 9 advisory Environment 1 0.000739 0.693 0.000512
## 10 advisory Trade and Export Growth 1 0.00139 0.693 0.000960
## # … with 1,056 more rows
</code></pre><p>The <code>idf</code> column identifies both language-specific stop words (e.g., “a”) and context-specific stop words (e.g., “advisory”) that are common across portfolios.</p>
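<p>As a sanity check, the displayed scores can be reproduced by hand. For example, “a” appears in three portfolio documents in the table above, and the <code>idf</code> values are consistent with a corpus of six portfolio documents (words with <code>idf</code> of 1.79 ≈ ln 6 appear in exactly one document):</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="c1"># Reproduce the tf-idf score for "a" in the Associate Finance document,</span>
<span class="c1"># assuming six portfolio documents (consistent with the idf column above)</span>
tf  &lt;- 0.00285       <span class="c1"># term frequency, from the table above</span>
idf &lt;- -log(3 / 6)   <span class="c1"># "a" occurs in 3 of 6 documents; equals ln(2) = 0.693</span>
tf * idf             <span class="c1"># matches the tf_idf column (0.00197)</span>
</code></pre></div>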
<p>The chart below presents the highest tf-idf words for each portfolio.
These words reveal organisations (e.g., the Parliamentary Counsel Office) and individuals (e.g., <a href="https://ec.europa.eu/commission/commissioners/2014-2019/malmstrom_en">Cecilia Malmström</a>) that are missing from the diary-wide word frequencies computed above.</p>
<p><img src="figures/highest-tf-idf-1.svg" alt=""></p>
<p>The chart also reveals which interactions correspond to which portfolios.
For example, Minister Parker’s frequent interactions with MBIE officials appear to be most associated with the Economic Development portfolio, while his interactions with Minister Sage appear to involve both the Environment and Associate Finance portfolios.
(<a href="https://www.greens.org.nz/sites/default/files/Eugenie%20Sage%27s%20July-Sept%202018%20Diary.pdf">Minister Sage’s diary</a> suggests that such cross-portfolio interactions relate to the Overseas Investment Office, for which Ministers Parker and Sage are jointly responsible.)</p>
<h2 id="acknowledgements">Acknowledgements</h2>
<p><a href="https://ropensci.org/technotes/2018/12/14/pdftools-20/">The pdftools 2.0 release notes</a> helped me interpret <code>pdf_data</code>'s output.
<a href="https://juliasilge.com">Julia Silge</a> and <a href="http://varianceexplained.org">David Robinson</a>’s book <a href="https://www.tidytextmining.com"><em>Text Mining with R</em></a> provided useful background reading, especially <a href="https://www.tidytextmining.com/tfidf.html">the chapter on tf-idf scores</a>.</p>
<h2 id="appendix">Appendix</h2>
<h3 id="source-code-for-clean_data">Source code for <code>clean_data()</code></h3>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">clean_data</span> <span class="o"><-</span> <span class="nf">function </span><span class="p">(</span><span class="n">df</span><span class="p">)</span> <span class="p">{</span>
<span class="n">df</span> <span class="o">%>%</span>
<span class="c1"># Replace non-ASCII characters with ASCII equivalents</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">text</span> <span class="o">=</span> <span class="nf">iconv</span><span class="p">(</span><span class="n">text</span><span class="p">,</span> <span class="s">""</span><span class="p">,</span> <span class="s">"ASCII"</span><span class="p">,</span> <span class="n">sub</span> <span class="o">=</span> <span class="s">"byte"</span><span class="p">),</span>
<span class="n">text</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"<c3><a7>"</span><span class="p">,</span> <span class="s">"c"</span><span class="p">,</span> <span class="n">text</span><span class="p">),</span>
<span class="n">text</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"<c3><a9>"</span><span class="p">,</span> <span class="s">"e"</span><span class="p">,</span> <span class="n">text</span><span class="p">),</span>
<span class="n">text</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"<c3><b1>"</span><span class="p">,</span> <span class="s">"n"</span><span class="p">,</span> <span class="n">text</span><span class="p">),</span>
<span class="n">text</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"<c4><81>"</span><span class="p">,</span> <span class="s">"a"</span><span class="p">,</span> <span class="n">text</span><span class="p">),</span>
<span class="n">text</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"<c5><ab>"</span><span class="p">,</span> <span class="s">"u"</span><span class="p">,</span> <span class="n">text</span><span class="p">),</span>
<span class="n">text</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"<e2><80><93>"</span><span class="p">,</span> <span class="s">"-"</span><span class="p">,</span> <span class="n">text</span><span class="p">),</span>
<span class="n">text</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"<e2><80><99>"</span><span class="p">,</span> <span class="s">"'"</span><span class="p">,</span> <span class="n">text</span><span class="p">),</span>
<span class="n">text</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"<e2><80><9c>|<e2><80><9d>"</span><span class="p">,</span> <span class="s">"\""</span><span class="p">,</span> <span class="n">text</span><span class="p">))</span> <span class="o">%>%</span>
<span class="c1"># Fix linebroken date ranges</span>
<span class="nf">spread</span><span class="p">(</span><span class="n">column</span><span class="p">,</span> <span class="n">text</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">split_date</span> <span class="o">=</span> <span class="nf">is.na</span><span class="p">(</span><span class="n">scheduled_time</span><span class="p">)</span> <span class="o">&</span> <span class="nf">grepl</span><span class="p">(</span><span class="s">"-"</span><span class="p">,</span> <span class="nf">paste</span><span class="p">(</span><span class="n">date</span><span class="p">,</span> <span class="nf">lag</span><span class="p">(</span><span class="n">date</span><span class="p">))),</span>
<span class="n">row</span> <span class="o">=</span> <span class="nf">cumsum</span><span class="p">(</span><span class="o">!</span><span class="n">split_date</span><span class="p">))</span> <span class="o">%>%</span>
<span class="nf">select</span><span class="p">(</span><span class="o">-</span><span class="n">split_date</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">gather</span><span class="p">(</span><span class="n">column</span><span class="p">,</span> <span class="n">text</span><span class="p">,</span> <span class="o">-</span><span class="n">row</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">group_by</span><span class="p">(</span><span class="n">row</span><span class="p">,</span> <span class="n">column</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">summarise</span><span class="p">(</span><span class="n">text</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"NA"</span><span class="p">,</span> <span class="s">""</span><span class="p">,</span> <span class="nf">paste</span><span class="p">(</span><span class="n">text</span><span class="p">,</span> <span class="n">collapse</span> <span class="o">=</span> <span class="s">" "</span><span class="p">)))</span> <span class="o">%>%</span>
<span class="nf">ungroup</span><span class="p">()</span> <span class="o">%>%</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">text</span> <span class="o">=</span> <span class="nf">trimws</span><span class="p">(</span><span class="n">text</span><span class="p">),</span>
<span class="n">text</span> <span class="o">=</span> <span class="nf">ifelse</span><span class="p">(</span><span class="n">text</span> <span class="o">==</span> <span class="s">""</span><span class="p">,</span> <span class="kc">NA</span><span class="p">,</span> <span class="n">text</span><span class="p">))</span> <span class="o">%>%</span>
<span class="c1"># Fix transcription errors</span>
<span class="nf">mutate</span><span class="p">(</span><span class="n">text</span> <span class="o">=</span> <span class="nf">gsub</span><span class="p">(</span><span class="s">"Minster"</span><span class="p">,</span> <span class="s">"Minister"</span><span class="p">,</span> <span class="n">text</span><span class="p">),</span>
<span class="n">text</span> <span class="o">=</span> <span class="nf">ifelse</span><span class="p">(</span><span class="n">column</span> <span class="o">==</span> <span class="s">"portfolio"</span> <span class="o">&</span> <span class="n">text</span> <span class="o">==</span> <span class="s">"Minister Little"</span><span class="p">,</span> <span class="s">"Attorney-General"</span><span class="p">,</span> <span class="n">text</span><span class="p">))</span>
<span class="p">}</span>
</code></pre></div>Relatedness, complexity and local growth
https://bldavies.com/blog/relatedness-complexity-local-growth/
Tue, 02 Apr 2019 00:00:00 +0000https://bldavies.com/blog/relatedness-complexity-local-growth/<p>I recently wrote an article for <a href="https://www.nzae.org.nz/blog-page/nzae-newsletters/"><em>Asymmetric Information</em></a> summarising <a href="https://motu.nz/our-work/urban-and-regional/regions/relatedness-complexity-and-local-growth/">my paper with Dave Maré</a> on the relatedness and complexity of economic activities in New Zealand.
The full text for that article is quoted below.</p>
<blockquote>
<h2 id="introduction">Introduction</h2>
<p>Current European regional policy encourages regions to build on their strengths by diversifying into activities that draw upon existing knowledge bases.
This “smart specialisation” approach encourages entrepreneurship, innovation and long-term growth by fostering local interactions between workers with complementary knowledge and skills.</p>
<p><a href="https://doi.org/10.1080/00343404.2018.1437900">Balland et al. (2018)</a> define a framework for analysing smart specialisation using the ideas of relatedness and complexity.
Expanding into activities that are related to existing specialisations carries low growth risk because local workers already possess the knowledge and skills needed to conduct those activities.
Expanding into complex activities delivers the highest expected economic returns because such activities “form the basis for long-run competitive advantage.”
Balland et al.’s framework identifies low-risk, high-return development opportunities as locally under-represented activities with high local relatedness and high complexity.</p>
<p>We examine the contribution of relatedness and complexity to urban employment growth in New Zealand.
This allows us to evaluate the efficacy of implementing smart specialisation policies in New Zealand by identifying whether the associated mechanisms appear to influence employment dynamics.</p>
<h2 id="data-and-methods">Data and methods</h2>
<p>Our analysis uses historical New Zealand census data aligned to current industry, occupation and urban area codes.
We select 50 “cities” (urban areas) and 200 “activities” (industry-occupation pairs) with persistently high employment in census years 1981, 1991, 2001 and 2013.
Our selected activities span 61 industries and nine occupations.</p>
<p>We recognise activities as being “related” if they require similar inputs.
We infer such similarities from employee co-location patterns.
These patterns reveal firms’ shared preferences for using spatially heterogeneous resources, which encourage firms engaged in related activities to co-locate in order to benefit from agglomeration economies.</p>
<p>We measure activities’ relatedness using weighted correlations of local employment shares.
Our approach extends discrete measures used in previous studies by recognising variation in the extent of local specialisation and by adjusting for differences in employment data quality between geographic areas.</p>
<p>We recognise activities as being “complex” if they rely on specialised combinations of complementary inputs.
For example, consulting is more complex than lecturing because consultants need local clients while lecturers do not rely as much on other activities being present locally.</p>
<p>We define activity complexity using the second eigenvector of the row-standardised activity relatedness matrix.
Our approach generalises <a href="https://doi.org/10.1371/journal.pone.0047278">Caldarelli et al.’s (2012)</a> eigenvector approximation of <a href="https://doi.org/10.1073/pnas.0900943106">Hidalgo and Hausmann’s (2009)</a> Method of Reflections.
We use a similar approach, applied to the transpose of the city-activity employment matrix, to estimate city complexity.</p>
<h2 id="mapping-relatedness">Mapping relatedness</h2>
<p>We define an “activity space” that captures the network structure of activities based on our relatedness estimates.
We describe activity space by a weighted network in which nodes correspond to activities and in which edges have weight equal to the relatedness between pairs of activities.
The subnetwork induced by the 500 edges of largest weight is shown below, with nodes coloured by occupation.</p>
<p><img src="figures/activity-space.svg" alt=""></p>
<p>At the centre of our map is a tightly connected, nest-shaped cluster of low-skill occupations in the distributive services sector.
To the right of this cluster is a group of medium- to low-skill occupations in the construction, retail and healthcare sectors.
These activities are ubiquitous and appear together as local relative specialisations in smaller, less diverse cities.
In contrast, the lower wing of our network map comprises a cluster of high-skill occupations in the professional and information service sectors, which tend to concentrate in large cities and to have higher levels of complexity.</p>
<h2 id="do-relatedness-and-complexity-predict-employment-growth">Do relatedness and complexity predict employment growth?</h2>
<p>More complex activities grew faster during our period of study.
On average and holding local relatedness constant at its weighted mean value, a one standard deviation increase in activity complexity is associated with a 0.89 percentage point increase in local employment growth per year.
This effect rises to 0.98 percentage points when we control for city complexity.
More locally related activities experienced slower growth, especially in complex cities.</p>
<p>Balland et al.’s (2018) framework suggests that complex activities with high local relatedness offer the strongest prospects for future growth.
If this were true then we would expect a strong positive coefficient on the interaction of local relatedness and activity complexity.
Our estimates show only a weak and insignificant interaction.</p>
<p>Relatedness appears to promote growth only in the largest and most complex cities.
This result is consistent with the idea that cities are dense networks of interacting activities: the benefits of such interaction are more apparent in larger cities, where workers and firms engaged in related activities interact more frequently.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Complex activities grew faster during our period of study, especially in complex cities.
However, this growth was not significantly stronger in cities more dense with related activities.
Overall, we do not identify strong effects of relatedness and complexity on growth in local activity employment.
It remains an open question whether the effects do not operate or whether New Zealand cities lack the scale for such operation.</p>
</blockquote>
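<p>The complexity measure described in the quoted article can be sketched in a few lines of R. This is an illustrative toy example under assumed random data, not the paper’s implementation: it builds a symmetric relatedness matrix, row-standardises it, and takes the second eigenvector.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="c1"># Toy sketch: activity complexity as the second eigenvector of the</span>
<span class="c1"># row-standardised relatedness matrix (illustrative data only)</span>
set.seed(1)
R &lt;- matrix(runif(25), 5, 5)
R &lt;- (R + t(R)) / 2              <span class="c1"># symmetrise toy relatedness estimates</span>
diag(R) &lt;- 0
S &lt;- R / rowSums(R)              <span class="c1"># row-standardise</span>
complexity &lt;- Re(eigen(S)$vectors[, 2])
</code></pre></div>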
<p>Further details are available in <a href="https://motu.nz/our-work/urban-and-regional/regions/relatedness-complexity-and-local-growth/">Motu Working Paper 19-01</a>.</p>
Accessing the Strava API with R
https://bldavies.com/blog/accessing-strava-api/
Sun, 06 Jan 2019 00:00:00 +0000https://bldavies.com/blog/accessing-strava-api/<p><a href="https://www.strava.com/">Strava</a> is an online platform for storing and sharing fitness data.
Strava <a href="https://developers.strava.com">provides an API</a> for accessing such data at the activity (e.g., run or cycle) level.
This post explains how I <a href="#setup-and-authentication">authenticate</a> with, and <a href="#extracting-the-data">extract data</a> from, the Strava API using R.
I implement my method in the R package <a href="https://github.com/bldavies/stravadata">stravadata</a>.</p>
<h2 id="setup-and-authentication">Setup and authentication</h2>
<p>Strava uses <a href="https://oauth.net/2/">OAuth 2.0</a> to authorise access to the API data.
The first step to becoming authorised is to register for access on <a href="https://www.strava.com/settings/api/">Strava’s API settings page</a>.
I put “localhost” in the “Authorization Callback Domain” field.
Upon completing the registration form, the page provides two important values: an integer client ID and an alpha-numeric client secret.
I store these values in <code>credentials.yaml</code>, which I structure as</p>
<div class="highlight"><pre class="chroma"><code class="language-yaml" data-lang="yaml"><span class="k">client_id</span><span class="p">:</span><span class="w"> </span>xxxxxxxxx<span class="w">
</span><span class="w"></span><span class="k">secret</span><span class="p">:</span><span class="w"> </span>xxxxxxxxx<span class="w">
</span></code></pre></div><p>and import into R using the <code>read_yaml</code> function from the <a href="https://cran.r-project.org/package=yaml"><code>yaml</code></a> package.</p>
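<p>Concretely, the import step might look like this (a sketch; it assumes <code>credentials.yaml</code> sits in the working directory):</p>

```r
library(yaml)

# Read the client ID and secret registered on Strava's settings page
credentials <- read_yaml("credentials.yaml")
credentials$client_id  # integer client ID
credentials$secret     # alpha-numeric client secret
```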
<p>Next, I create an OAuth application for interacting with the API and an endpoint through which to send authentication requests.
I use the <code>oauth_app</code> and <code>oauth_endpoint</code> functions from <a href="https://cran.r-project.org/package=httr"><code>httr</code></a>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">httr</span><span class="p">)</span>
<span class="n">app</span> <span class="o"><-</span> <span class="nf">oauth_app</span><span class="p">(</span><span class="s">"strava"</span><span class="p">,</span> <span class="n">credentials</span><span class="o">$</span><span class="n">client_id</span><span class="p">,</span> <span class="n">credentials</span><span class="o">$</span><span class="n">secret</span><span class="p">)</span>
<span class="n">endpoint</span> <span class="o"><-</span> <span class="nf">oauth_endpoint</span><span class="p">(</span>
<span class="n">request</span> <span class="o">=</span> <span class="kc">NULL</span><span class="p">,</span>
<span class="n">authorize</span> <span class="o">=</span> <span class="s">"https://www.strava.com/oauth/authorize"</span><span class="p">,</span>
<span class="n">access</span> <span class="o">=</span> <span class="s">"https://www.strava.com/oauth/token"</span>
<span class="p">)</span>
</code></pre></div><p>Finally, I create an OAuth access token to send the authentication request to my Strava account.
This token encapsulates the application and endpoint defined above.
Running<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">token</span> <span class="o"><-</span> <span class="nf">oauth2.0_token</span><span class="p">(</span><span class="n">endpoint</span><span class="p">,</span> <span class="n">app</span><span class="p">,</span> <span class="n">as_header</span> <span class="o">=</span> <span class="kc">FALSE</span><span class="p">,</span>
<span class="n">scope</span> <span class="o">=</span> <span class="s">"activity:read_all"</span><span class="p">)</span>
</code></pre></div><p>opens a browser window at a web page for accepting the authentication request.
Doing so redirects me to the callback domain (“localhost”) and prints a confirmation message:</p>
<blockquote>
<p>Authentication complete. Please close this page and return to R.</p>
</blockquote>
<h2 id="extracting-the-data">Extracting the data</h2>
<p>After authenticating with Strava, I use HTTP requests to extract activity data from the API.
The API returns multiple pages of data, each containing up to 200 activities.
I use a while loop to iterate over pages, using the <code>fromJSON</code> function from <a href="https://cran.r-project.org/package=jsonlite"><code>jsonlite</code></a> to parse the extracted data:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">jsonlite</span><span class="p">)</span>
<span class="n">df_list</span> <span class="o"><-</span> <span class="nf">list</span><span class="p">()</span>
<span class="n">i</span> <span class="o"><-</span> <span class="m">1</span>
<span class="n">done</span> <span class="o"><-</span> <span class="kc">FALSE</span>
<span class="nf">while </span><span class="p">(</span><span class="o">!</span><span class="n">done</span><span class="p">)</span> <span class="p">{</span>
<span class="n">req</span> <span class="o"><-</span> <span class="nf">GET</span><span class="p">(</span>
<span class="n">url</span> <span class="o">=</span> <span class="s">"https://www.strava.com/api/v3/athlete/activities"</span><span class="p">,</span>
<span class="n">config</span> <span class="o">=</span> <span class="n">token</span><span class="p">,</span>
<span class="n">query</span> <span class="o">=</span> <span class="nf">list</span><span class="p">(</span><span class="n">per_page</span> <span class="o">=</span> <span class="m">200</span><span class="p">,</span> <span class="n">page</span> <span class="o">=</span> <span class="n">i</span><span class="p">)</span>
<span class="p">)</span>
<span class="n">df_list[[i]]</span> <span class="o"><-</span> <span class="nf">fromJSON</span><span class="p">(</span><span class="nf">content</span><span class="p">(</span><span class="n">req</span><span class="p">,</span> <span class="n">as</span> <span class="o">=</span> <span class="s">"text"</span><span class="p">),</span> <span class="n">flatten</span> <span class="o">=</span> <span class="kc">TRUE</span><span class="p">)</span>
<span class="nf">if </span><span class="p">(</span><span class="nf">length</span><span class="p">(</span><span class="nf">content</span><span class="p">(</span><span class="n">req</span><span class="p">))</span> <span class="o"><</span> <span class="m">200</span><span class="p">)</span> <span class="p">{</span>
<span class="n">done</span> <span class="o"><-</span> <span class="kc">TRUE</span>
<span class="p">}</span> <span class="n">else</span> <span class="p">{</span>
<span class="n">i</span> <span class="o"><-</span> <span class="n">i</span> <span class="o">+</span> <span class="m">1</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div><p>Finally, I use the <code>rbind_pages</code> function from <code>jsonlite</code> to collate the activity data into a single data frame:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">df</span> <span class="o"><-</span> <span class="nf">rbind_pages</span><span class="p">(</span><span class="n">df_list</span><span class="p">)</span>
</code></pre></div><section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Strava’s <a href="https://developers.strava.com/docs/oauth-updates/">OAuth update</a> in October 2019 made <code>scope</code> specification a requirement. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Guest appearances on *The Joe Rogan Experience*
https://bldavies.com/blog/guest-appearances-joe-rogan-experience/
Wed, 26 Sep 2018 00:00:00 +0000https://bldavies.com/blog/guest-appearances-joe-rogan-experience/<p><a href="https://www.joerogan.com/#jre-section"><em>The Joe Rogan Experience</em></a> (<em>JRE</em>) is a podcast hosted by comedian and mixed martial arts (MMA) commentator Joe Rogan.
In this post, I analyse the relationship between <em>JRE</em> guest appearances and popularity using data from <a href="https://trends.google.com/trends">Google Trends</a>.
I find that guests typically experience a spike in popularity immediately after appearing on the podcast.</p>
<p>The data used in my analysis are available <a href="https://github.com/bldavies/jre-guests">here</a>.</p>
<h2 id="collecting-the-data">Collecting the data</h2>
<p>I scrape <a href="http://podcasts.joerogan.net">the <em>JRE</em> podcast directory</a> for a list of episode dates, numbers and titles.
The directory comprises a multi-page table that is dynamically updated using HTTP requests.
I use <a href="https://stackoverflow.com/a/46311833">this method</a> to emulate such requests, allowing me to iterate over table pages and extract the raw episode metadata.
I clean these data by</p>
<ol>
<li>removing non-standard episodes (such as MMA Shows and Fight Companions),</li>
<li>fixing any missing, incorrect or duplicate episode numbers, and</li>
<li>removing non-ASCII characters from episode titles.</li>
</ol>
<p><a href="https://github.com/bldavies/jre-guests/blob/master/data/episodes.csv">The resulting file</a> contains clean metadata for <em>JRE</em> episodes #1 through #1172.
I use these data to create <a href="https://github.com/bldavies/jre-guests/blob/master/data/guests.csv">a list of guests</a> that appear in each episode, making several manual adjustments that correct for inconsistent or missing guest names.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p>The bar chart below plots the number of episodes, unique guests and first appearances by year for 2010 through 2018.
On average, the number of <em>JRE</em> episodes and guests increased each year, although the proportion of guests appearing on the show for the first time appears to be falling.</p>
<p><img src="figures/annual-counts-1.svg" alt=""></p>
<h2 id="estimating-popularity">Estimating popularity</h2>
<p>I infer guests’ popularity from Google Trends data on web searches in the United States.
These data index the proportion of total Google search queries attributable to particular keywords.
Google Trends provides data on a 0–100 scale, where 100 denotes the maximum search interest for the corresponding keyword in a given period and locale.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup></p>
<p>I collect Google Trends data for each identified <em>JRE</em> guest and for Joe himself.
<a href="https://github.com/bldavies/jre-guests/blob/master/data/popularity.csv">My data</a> provide weekly estimates of individuals’ online popularity for the five years beginning September 2013.
I assume that these data are unbiased estimates of guests’ actual popularity.</p>
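<p>As an illustration of how such data might be pulled (an assumption: the post does not specify the tool used; the <code>gtrendsR</code> package is one option):</p>

```r
# Hypothetical sketch: query weekly US search interest for a guest
# using the gtrendsR package (not necessarily the tool used here).
library(gtrendsR)
res <- gtrends("Dave Rubin", geo = "US", time = "2013-09-01 2018-09-01")
head(res$interest_over_time)  # columns include date, hits, keyword, geo
```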
<p>The chart below plots Joe’s estimated popularity during my sample period.
Web search interest for the phrase “Joe Rogan” more than doubled between September 2013 and September 2018.
The spike during the first week of September 2018 marks <a href="https://www.youtube.com/watch?v=ycPr5-27vSI"><em>JRE</em> episode #1169 with Elon Musk</a>.</p>
<p><img src="figures/joe-rogan-popularity-1.svg" alt=""></p>
<h2 id="identifying-popularity-spikes">Identifying popularity spikes</h2>
<p>I align <em>JRE</em> guest appearance dates with my Google Trends data in order to determine whether such appearances coincide with popularity spikes.
I identify spikes as large, sudden deviations in search interest from its mean value.
I allow this mean to change over time by defining a moving average (MA) series, which I subtract from the actual interest series in order to construct a demeaned series that captures the idiosyncratic variation in guests’ popularity.<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup></p>
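<p>A minimal sketch of this step, using the order-seven moving average described in the footnote (the function itself is my own illustration):</p>

```r
# Sketch: centred moving average of order 7 and the demeaned series.
# stats::filter leaves NAs where the 7-week window is incomplete.
demean_interest <- function(interest, order = 7) {
  ma <- as.numeric(stats::filter(interest, rep(1 / order, order), sides = 2))
  interest - ma
}
```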
<p>For example, the chart below plots the actual, moving average and demeaned search interest series for Dave Rubin—political commentator and host of <a href="https://www.rubinreport.com"><em>The Rubin Report</em></a>—who appeared on <em>The Joe Rogan Experience</em> in the three weeks identified by the dashed vertical lines.
Dave’s gradual rise in popularity since late 2015 is punctuated by three spikes in search interest that coincide with his <em>JRE</em> appearances.</p>
<p><img src="figures/dave-rubin-popularity-1.svg" alt=""></p>
<p>I construct the demeaned search interest series for each guest who appears on <em>The Joe Rogan Experience</em> during my sample period.
I standardise each of these series to have zero mean and unit variance across the entire sample period in order to make the series comparable.
The distributions of guests’ standardised demeaned search interest in the weeks surrounding their appearances are shown below.</p>
<p><img src="figures/densities-1.svg" alt=""></p>
<p>In the two weeks prior to appearing on <em>The Joe Rogan Experience</em>, guests’ popularities are centred about a standard deviation below their MA trend value, reflecting a rise in that value due to an impending upward shock.
Appearances coincide with a shift in probability density towards positive deviations from local means.
Traces of this shift disappear after about three weeks, at which time the distribution of standardised demeaned search interest mimics that observed five weeks prior.
These dynamics suggest that, on average, <em>JRE</em> guests experience an increase in popularity during the week in which they appear on the podcast.</p>
<h2 id="detecting-spikes-in-real-time">Detecting spikes in real-time</h2>
<p>I obtain more rigorous results using <a href="https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/22640362#22640362">this real-time spike detection algorithm</a>.
The algorithm builds a filtering series alongside the actual search interest series, and computes a rolling mean and standard deviation for the filtering series over the previous <code>lag</code> observations.
Spikes correspond to values in the actual series that deviate from the filtering mean by some <code>threshold</code> number of standard deviations.
A third parameter <code>influence</code> controls how sensitive the filtering series is to spikes.</p>
<p>The real-time algorithm defines a signal series that denotes super-threshold deviations above and below the filtering mean by 1 and -1, respectively, and sub-threshold deviations by 0.
Positive signals identify spikes in search interest relative to recent trends.
The rate at which such signals coincide with <em>JRE</em> guest appearances offers insight into whether such appearances herald popularity spikes.</p>
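<p>A minimal sketch of the algorithm, as I understand it from the linked answer:</p>

```r
# Real-time peak detection: flag observations that deviate from the
# rolling mean of a filtered series by more than `threshold` rolling
# standard deviations, computed over the previous `lag` observations.
detect_spikes <- function(y, lag = 12, threshold = 2, influence = 0.5) {
  signal <- rep(0, length(y))
  filtered <- y
  for (i in (lag + 1):length(y)) {
    mu <- mean(filtered[(i - lag):(i - 1)])
    sigma <- sd(filtered[(i - lag):(i - 1)])
    if (abs(y[i] - mu) > threshold * sigma) {
      signal[i] <- ifelse(y[i] > mu, 1, -1)  # super-threshold deviation
      # Damp the filtered series so one spike doesn't dominate the window
      filtered[i] <- influence * y[i] + (1 - influence) * filtered[i - 1]
    }  # else: signal stays 0 and filtered[i] already equals y[i]
  }
  signal
}
```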
<p>For example, the chart below plots the actual, filtering and signal series for Dave Rubin’s estimated popularity during my sample period, along with the dates of his three <em>JRE</em> appearances.
I compute the filtering means and standard deviations with <code>lag</code> equal to 12, and set the filtering threshold at two standard deviations from the filtering mean.
Positive signals register when the actual series deviates above the grey band.</p>
<p><img src="figures/dave-rubin-signal-1.svg" alt=""></p>
<p>The real-time algorithm identifies spikes coincident with each of Dave’s appearances on <em>The Joe Rogan Experience</em>.
However, it also identifies false positives that reflect other sources of sudden popularity booms.</p>
<p>I compute the empirical probability that the real-time algorithm detects a spike in guests’ popularity conditional upon their appearing on <em>The Joe Rogan Experience</em> in the same or previous week.<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>
The table below reports this probability for a range of <code>lag</code> and <code>threshold</code> values, and with <code>influence</code> equal to 0.5.<sup id="fnref:5"><a href="#fn:5" class="footnote-ref" role="doc-noteref">5</a></sup></p>
<table>
<thead>
<tr>
<th align="center">Pr(Spike | Appears)</th>
<th align="center"><code>lag = 3</code></th>
<th align="center"><code>lag = 6</code></th>
<th align="center"><code>lag = 9</code></th>
<th align="center"><code>lag = 12</code></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center"><strong><code>threshold = 1</code></strong></td>
<td align="center">0.940</td>
<td align="center">0.923</td>
<td align="center">0.905</td>
<td align="center">0.892</td>
</tr>
<tr>
<td align="center"><strong><code>threshold = 2</code></strong></td>
<td align="center">0.896</td>
<td align="center">0.866</td>
<td align="center">0.845</td>
<td align="center">0.824</td>
</tr>
<tr>
<td align="center"><strong><code>threshold = 3</code></strong></td>
<td align="center">0.837</td>
<td align="center">0.808</td>
<td align="center">0.771</td>
<td align="center">0.748</td>
</tr>
<tr>
<td align="center"><strong><code>threshold = 4</code></strong></td>
<td align="center">0.791</td>
<td align="center">0.753</td>
<td align="center">0.725</td>
<td align="center">0.696</td>
</tr>
</tbody>
</table>
<p>Increasing <code>lag</code> or <code>threshold</code> lowers the detection rate, indicating that the real-time algorithm is more likely to identify guest appearances when it is more adaptive and less picky.
The negative relationship between detection rate and <code>lag</code> (with <code>threshold</code> held constant) suggests that, on average, guests’ popularities are more volatile over longer horizons: the further back you look in search history, the more likely you are to remember shocks and so the larger new shocks must be to seem uncommon.</p>
<h2 id="conclusion">Conclusion</h2>
<p>In general, appearing on <em>The Joe Rogan Experience</em> seems to coincide with a spike in popularity as measured by web search interest.
This result is robust to varying the definition of “spike,” at least along the dimensions of the <code>lag</code> and <code>threshold</code> parameters used by the real-time detection algorithm.</p>
<p>While suggestive, my analysis is not causal because I do not compare my results with the counterfactual scenario in which treatments (i.e., <em>JRE</em> appearances) do not occur.
The false positives identified by the real-time algorithm are reminders that my results may be driven by other confounding factors.</p>
<p>It would be useful to compare guests’ popularity dynamics near <em>JRE</em> appearances with those near appearances on other fora.
This comparison would help me separate the effect of increased online presence in general from the effect of appearing on <em>The Joe Rogan Experience</em> in particular, and may thereby provide stronger hints at causality.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>I exclude Brian Redban’s appearances prior to episode #674, when he returned as a guest for the first time after producing and co-hosting the show until late 2013. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p><a href="https://support.google.com/trends/answer/4365533?hl=en&ref_topic=6248052">Google Trends’ FAQ</a> does not identify how the raw search proportions get mapped to [0, 100]. I assume that the map is linear so that, for example, an increase from 25 to 50 and from 50 to 100 both constitute a doubling in popularity. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>I use an MA order of seven. Thus, each observation in the moving average series is equal to the mean value over the two surrounding months in the actual series. This choice seems to optimally suppress the impact of spikes on local means. <a href="#fnref:3" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p>Google Trends provides data in weekly intervals with weeks starting on Saturdays. I include lagged weeks in the detection criterion to allow for latency between <em>JRE</em> episode transmission and audience response. For example, the web search activity attributable to an episode aired on a Friday may not occur until the Saturday that begins the following week. <a href="#fnref:4" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:5" role="doc-endnote">
<p>I obtain similar patterns with <code>influence</code> equal to 0.3 and 0.7. <a href="#fnref:5" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Coauthorship networks at Motu
https://bldavies.com/blog/coauthorship-networks-motu/
Thu, 21 Jun 2018 00:00:00 +0000https://bldavies.com/blog/coauthorship-networks-motu/<p>Earlier this year I joined <a href="https://motu.nz">Motu</a>, an economic and public policy research institute based in Wellington, New Zealand.
In this post, I analyse the coauthorship network among Motu researchers based on working paper publications.
The data used in my analysis are available <a href="https://github.com/bldavies/motuwp/">here</a>.</p>
<h2 id="collecting-and-preparing-the-data">Collecting and preparing the data</h2>
<p>Bibliographic data are notoriously uncooperative.
Changes in author or institution names make it difficult to uniquely identify researchers across time, reducing data consistency and completeness.
Moreover, most bibliographic databases charge an access fee that discourages casual exploration.
Fortunately, <a href="https://motu.nz/resources/working-papers/">Motu’s working paper directory</a> is presented in a consistent format that makes it amenable to web scraping free of charge.</p>
<p>The R script <a href="https://github.com/bldavies/motuwp/tree/8f4b1c02e04f8e5e45b4325195bb4f03ac0ee707/code/data.R"><code>data.R</code></a> scrapes the directory for a list of working paper IDs and URLs.
Each URL points to a landing page for the corresponding paper, which I scrape for a list of authors.
I include only those authors with outgoing hyperlinks because</p>
<ol>
<li>the hyperlinked URL provides a unique and persistent author ID, and</li>
<li>it is much easier to perform a regular expression search for <code>&lt;a href="(.*?)"&gt;</code> than to distinguish different uses of commas case-by-case.</li>
</ol>
<p>The resulting file <a href="https://github.com/bldavies/motuwp/tree/8f4b1c02e04f8e5e45b4325195bb4f03ac0ee707/data/authors.csv"><code>authors.csv</code></a> contains each unique author-paper pair.
It excludes the authors of five papers for which either (i) there is no landing page linked from the main directory or (ii) the landing page has no authors with outgoing hyperlinks.</p>
<p>I read in <code>authors.csv</code> and two other tables:
<a href="https://github.com/bldavies/motuwp/tree/8f4b1c02e04f8e5e45b4325195bb4f03ac0ee707/data/areas.csv"><code>areas.csv</code></a>, which contains the name, ID and ambient colour for each of <a href="https://motu.nz/our-work/">Motu’s six primary research areas</a>; and
<a href="https://github.com/bldavies/motuwp/tree/8f4b1c02e04f8e5e45b4325195bb4f03ac0ee707/data/papers.csv"><code>papers.csv</code></a>, which links each paper to its research area.
I merge these data into a single tibble <code>data</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">dplyr</span><span class="p">)</span>
<span class="n">data</span> <span class="o"><-</span> <span class="n">authors</span> <span class="o">%>%</span>
<span class="nf">left_join</span><span class="p">(</span><span class="n">papers</span><span class="p">)</span> <span class="o">%>%</span>
<span class="nf">left_join</span><span class="p">(</span><span class="n">areas</span><span class="p">)</span>
</code></pre></div><h2 id="the-authorship-network">The authorship network</h2>
<p>I next construct an authorship network by pairing papers with their authors using the information contained in <code>data</code>.
I achieve this by defining an author-paper incidence matrix</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">incidence</span> <span class="o"><-</span> <span class="nf">table</span><span class="p">(</span><span class="n">data</span><span class="o">$</span><span class="n">author</span><span class="p">,</span> <span class="n">data</span><span class="o">$</span><span class="n">paper</span><span class="p">)</span>
</code></pre></div><p>and using that matrix to create a bipartite network <code>bip</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">igraph</span><span class="p">)</span>
<span class="n">bip</span> <span class="o"><-</span> <span class="nf">graph.incidence</span><span class="p">(</span><span class="n">incidence</span><span class="p">)</span>
</code></pre></div><p>The authorship network <code>bip</code> contains 74 authors who collectively wrote 232 working papers over the 2003–2018 sample period.
Those papers are distributed across Motu’s research areas as shown in the chart below.</p>
<p><img src="figures/area-counts-1.svg" alt=""></p>
<p>The variation in working paper counts reflects the variation in areas’ tenure within Motu’s research portfolio.
Environment and Resources, contributing 67 working papers, has been around since the series began; Human Rights, appearing only once in the series, is a relatively new research area for Motu.</p>
<p>The authorship network <code>bip</code> is drawn below using <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.4380211102">Fruchterman and Reingold’s (1991)</a> force-directed algorithm.
Squares denote working papers and are coloured by research area.
Each circle denotes an author and is scaled according to the number of working papers (co)written by that author.</p>
<p><img src="figures/author-network-1.svg" alt=""></p>
<p>A striking feature of <code>bip</code> is the presence of three high-degree vertices, or <em>hubs</em>, each representing an author of at least 48 working papers.
These hubs are shaded in the map of <code>bip</code> shown above.
Another feature is the variation in area diversity within authors’ individual corpuses.
Urban and Regional authors tend to also write papers on Wellbeing and Macroeconomics, while Environment and Resources authors are more specialised.</p>
<h2 id="the-coauthorship-network">The coauthorship network</h2>
<p>Projecting <code>bip</code> onto the set of authors yields a coauthorship network in which two authors are adjacent if they have written a paper together.
I define such a projection via</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">net</span> <span class="o"><-</span> <span class="nf">bipartite.projection</span><span class="p">(</span><span class="n">bip</span><span class="p">)</span><span class="n">[[1]]</span>
</code></pre></div><p>I use the <a href="https://github.com/bldavies/pokenet/blob/master/code/jaccard.R"><code>jaccard</code></a> function described in my previous post to determine the similarity between two authors from their authorship counts.
According to this measure, maximally similar authors always write together while maximally dissimilar authors never write together.
Again, I use the Fruchterman-Reingold algorithm for distributing vertices in the plane.
The resulting map of <code>net</code> is shown below.</p>
<p><img src="figures/coauthor-network-1.svg" alt=""></p>
<p>The coauthorship network is sparse, containing only 168 (about 6%) of the 2,701 possible edges between its 74 vertices.
However, the largest connected component (LCC) of <code>net</code> contains all but six authors: two of these write exclusively with each other, and the remaining four have no coauthors.
Such connectivity is facilitated by the three shaded hubs identified above.</p>
<h3 id="hints-of-small-worldness">Hints of small-worldness</h3>
<p>The sparsity of <code>net</code> implies that most pairs of authors aren’t coauthors.
Indeed, the probability that two randomly selected authors are coauthors is given by <code>net</code>'s edge density: about 0.06.
However, it is not unusual for two randomly selected authors to share a common coauthor; within the LCC of <code>net</code>, the probability of such an event is about 0.46.
I calculate this probability by examining the distribution of (unweighted) <a href="https://en.wikipedia.org/wiki/Distance_%28graph_theory%29">geodesic distances</a> between the vertices in <code>net</code> and determining the proportion of vertex pairs that are distance two apart.
The following function performs that calculation for an arbitrary connected graph <code>G</code>.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">common_neighbour_rate</span> <span class="o"><-</span> <span class="nf">function </span><span class="p">(</span><span class="n">G</span><span class="p">)</span> <span class="p">{</span>
<span class="n">B</span> <span class="o"><-</span> <span class="nf">distances</span><span class="p">(</span><span class="n">G</span><span class="p">,</span> <span class="n">weights</span> <span class="o">=</span> <span class="nf">rep</span><span class="p">(</span><span class="m">1</span><span class="p">,</span> <span class="nf">gsize</span><span class="p">(</span><span class="n">G</span><span class="p">)))</span> <span class="o">==</span> <span class="m">2</span>
<span class="n">num_pairs</span> <span class="o"><-</span> <span class="nf">choose</span><span class="p">(</span><span class="nf">gorder</span><span class="p">(</span><span class="n">G</span><span class="p">),</span> <span class="m">2</span><span class="p">)</span>
<span class="n">rate</span> <span class="o"><-</span> <span class="p">(</span><span class="nf">sum</span><span class="p">(</span><span class="n">B</span><span class="p">)</span> <span class="o">/</span> <span class="m">2</span><span class="p">)</span> <span class="o">/</span> <span class="n">num_pairs</span> <span class="c1"># Mean within upper right triangle</span>
<span class="nf">return </span><span class="p">(</span><span class="n">rate</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div><p>The function <code>common_neighbour_rate</code> works by computing the geodesic distances between each pair of vertices in <code>G</code>, defining binary indicator variables (as entries of the matrix <code>B</code>) for whether each distance is equal to two and taking the average of those variables over all possible vertex pairs.
Its name comes from recognising that “coauthor” is a context-specific synonym for “neighbouring vertex.”</p>
<p>Within the LCC of <code>net</code>, the average distance between any two authors is equal to 2.5 while the maximum such distance—the <em>diameter</em> of the LCC—is equal to five.
These numbers suggest a smallness about the world inhabited by Motu working paper authors: if you ask anyone if they’ve written a paper with so-and-so, the answer you’ll get is probably, “no, but I’ve written with someone who has written with someone that has.”
It appears that, at least in terms of geodesic distances, Motu researchers are seldom far apart.</p>
<h3 id="testing-for-small-worldness">Testing for small-worldness</h3>
<p><a href="https://www.nature.com/articles/30918">Watts and Strogatz (1998)</a> formalise the idea of small-worldness.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
They identify small-world networks as those that are</p>
<blockquote>
<p>highly clustered … yet have small characteristic path lengths.</p>
</blockquote>
<p>The extent to which a network is clustered is determined by its <a href="https://en.wikipedia.org/wiki/Clustering_coefficient#Global_clustering_coefficient">clustering coefficient</a>, while the characteristic path length is simply the mean geodesic distance between pairs of vertices.
Intuitively, a network is small-world if it has local communities whose links are mostly internal but with a few external links that facilitate fast inter-community exchange.
For example, most flights undertaken by New Zealanders comprise travel within our dense domestic network, but a Cantabrian wanting to holiday in Bangkok or Dubai need only make a pitstop in Sydney.
The latter acts as a hub that connects many distant cities in the same way that the three shaded vertices in the map of <code>net</code> above connect many otherwise distant authors.</p>
<p><a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0002051">Humphries and Gurney (2008)</a> describe a method for determining small-worldness using random graphs.
Their strategy is to compare the clustering coefficient and mean distance between vertices in a network to the expected value of those attributes if edges are randomly distributed.
Concretely, they state that</p>
<blockquote>
<p>A network with <code>n</code> nodes and <code>m</code> edges is a small-world network if it has a similar path length but greater clustering of nodes than an equivalent Erdös-Rényi random graph with the same <code>n</code> and <code>m</code>.</p>
</blockquote>
<p>The <a href="https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model">Erdös-Rényi model</a> is a simple method of generating random graphs with a fixed number of vertices and edges, the latter being placed between vertex pairs with uniform probability and without duplication.
Such graphs tend to have short mean distances because edges are as likely to traverse the network and bridge communities as they are to consolidate an already tight local community.
Likewise, random edge assignment disregards community formation, causing Erdös-Rényi graphs to have small clustering coefficients.</p>
<p>The function below computes the clustering coefficient (known to <code>igraph</code> users as <code>transitivity</code>) and characteristic path length for a sample of Erdös-Rényi random graphs that are equivalent to an arbitrary graph <code>G</code>.
The sample means of these attributes provide baselines against which to measure the corresponding values observed from <code>G</code>.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">small_world_baselines</span> <span class="o"><-</span> <span class="nf">function </span><span class="p">(</span><span class="n">G</span><span class="p">,</span> <span class="n">sample_size</span> <span class="o">=</span> <span class="m">1000</span><span class="p">,</span> <span class="n">seed</span> <span class="o">=</span> <span class="m">0</span><span class="p">)</span> <span class="p">{</span>
<span class="nf">set.seed</span><span class="p">(</span><span class="n">seed</span><span class="p">)</span>
<span class="n">transitivity_samples</span> <span class="o"><-</span> <span class="nf">rep</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="n">sample_size</span><span class="p">)</span>
<span class="n">mean_distance_samples</span> <span class="o"><-</span> <span class="nf">rep</span><span class="p">(</span><span class="m">0</span><span class="p">,</span> <span class="n">sample_size</span><span class="p">)</span>
<span class="nf">for </span><span class="p">(</span><span class="n">i</span> <span class="n">in</span> <span class="m">1</span> <span class="o">:</span> <span class="n">sample_size</span><span class="p">)</span> <span class="p">{</span>
<span class="n">er</span> <span class="o"><-</span> <span class="nf">sample_gnm</span><span class="p">(</span><span class="nf">gorder</span><span class="p">(</span><span class="n">G</span><span class="p">),</span> <span class="nf">gsize</span><span class="p">(</span><span class="n">G</span><span class="p">))</span>
<span class="n">transitivity_samples[i]</span> <span class="o"><-</span> <span class="nf">transitivity</span><span class="p">(</span><span class="n">er</span><span class="p">)</span>
<span class="n">mean_distance_samples[i]</span> <span class="o"><-</span> <span class="nf">mean_distance</span><span class="p">(</span><span class="n">er</span><span class="p">,</span> <span class="n">directed</span> <span class="o">=</span> <span class="kc">FALSE</span><span class="p">)</span>
<span class="p">}</span>
<span class="nf">return </span><span class="p">(</span><span class="nf">list</span><span class="p">(</span><span class="n">transitivity</span> <span class="o">=</span> <span class="nf">mean</span><span class="p">(</span><span class="n">transitivity_samples</span><span class="p">),</span>
<span class="n">mean_distance</span> <span class="o">=</span> <span class="nf">mean</span><span class="p">(</span><span class="n">mean_distance_samples</span><span class="p">)))</span>
<span class="p">}</span>
</code></pre></div><p>The coauthorship network <code>net</code> has clustering coefficient 0.24 and mean distance 2.49, with baseline comparators of 0.06 and 2.96.
Thus, <code>net</code> is about four times as clustered as is expected for a network with its density and has slightly shorter geodesic distances than would be obtained by allocating edges randomly.
These facts positively indicate small-worldness, and reflect widespread collaboration between authors within and between research areas.</p>
<p>Humphries and Gurney define a <em>small-world coefficient</em> by taking the ratio of observed and expected clustering coefficients, and dividing the result by the ratio of observed and expected mean distances.
This quotient is larger than one for small-world networks.
The coauthorship network <code>net</code> obtains a small-world coefficient of 4.67, thereby passing the Humphries-Gurney small-worldness test.</p>
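<p>Concretely, the coefficient is a ratio of ratios. A minimal sketch (the helper <code>small_world_coefficient</code> is my own name, not from the post; its <code>baselines</code> argument is the list returned by <code>small_world_baselines</code> above):</p>

```r
library(igraph)

# Humphries-Gurney small-world coefficient: the ratio of observed to
# baseline clustering, divided by the ratio of observed to baseline
# mean distance. Values above one indicate small-worldness.
small_world_coefficient <- function(G, baselines) {
  clustering_ratio <- transitivity(G) / baselines$transitivity
  distance_ratio <- mean_distance(G) / baselines$mean_distance
  clustering_ratio / distance_ratio
}
```

<p>Plugging in the rounded values reported above gives (0.24 / 0.06) / (2.49 / 2.96) ≈ 4.76, close to the 4.67 obtained from the unrounded data.</p>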
<h2 id="subsampling-by-research-area">Subsampling by research area</h2>
<p>Finally, I analyse the coauthorship network within Motu’s five largest research areas.
I filter the working papers from <code>data</code> that correspond to each area and recompute several statistics mentioned earlier using the subsample data.
The first set of statistics is shown in the table below.</p>
<table>
<thead>
<tr>
<th align="left">Area</th>
<th align="right">Papers</th>
<th align="right">Authors</th>
<th align="right">Edge density</th>
<th align="right">LCC order</th>
<th align="right">LCC diameter</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Environment and Resources</td>
<td align="right">67</td>
<td align="right">37</td>
<td align="right">0.08</td>
<td align="right">29</td>
<td align="right">3</td>
</tr>
<tr>
<td align="left">Population and Labour</td>
<td align="right">56</td>
<td align="right">29</td>
<td align="right">0.13</td>
<td align="right">26</td>
<td align="right">4</td>
</tr>
<tr>
<td align="left">Urban and Regional</td>
<td align="right">50</td>
<td align="right">32</td>
<td align="right">0.09</td>
<td align="right">29</td>
<td align="right">4</td>
</tr>
<tr>
<td align="left">Wellbeing and Macroeconomics</td>
<td align="right">35</td>
<td align="right">19</td>
<td align="right">0.13</td>
<td align="right">14</td>
<td align="right">2</td>
</tr>
<tr>
<td align="left">Productivity and Innovation</td>
<td align="right">23</td>
<td align="right">18</td>
<td align="right">0.20</td>
<td align="right">17</td>
<td align="right">4</td>
</tr>
</tbody>
</table>
<p>Environment and Resources boasts the largest number of authors as well as working papers. However, it has the least dense coauthorship network, containing only 8% of all possible edges.
The Productivity and Innovation coauthorship network is the most dense.
The largest connected component of the Wellbeing and Macroeconomics coauthorship network is the smallest among the five areas; however, any two authors within its LCC are coauthors or share a common coauthor.</p>
<p>I also test each area’s coauthorship network for small-worldness using the Humphries-Gurney procedure.
The results are tabulated below.</p>
<table>
<thead>
<tr>
<th align="left">Area</th>
<th align="right">Clustering coefficient (baseline)</th>
<th align="right">Mean distance (baseline)</th>
<th align="right">Small-world coefficient</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Environment and Resources</td>
<td align="right">0.25 (0.08)</td>
<td align="right">1.93 (3.16)</td>
<td align="right">5.13</td>
</tr>
<tr>
<td align="left">Population and Labour</td>
<td align="right">0.33 (0.13)</td>
<td align="right">2.15 (2.56)</td>
<td align="right">3.01</td>
</tr>
<tr>
<td align="left">Urban and Regional</td>
<td align="right">0.17 (0.09)</td>
<td align="right">2.13 (3.04)</td>
<td align="right">2.71</td>
</tr>
<tr>
<td align="left">Wellbeing and Macroeconomics</td>
<td align="right">0.19 (0.11)</td>
<td align="right">1.77 (2.88)</td>
<td align="right">2.76</td>
</tr>
<tr>
<td align="left">Productivity and Innovation</td>
<td align="right">0.39 (0.19)</td>
<td align="right">2.24 (2.31)</td>
<td align="right">2.17</td>
</tr>
</tbody>
</table>
<p>All five areas have small-world coefficients greater than one, and therefore satisfy Humphries and Gurney’s criterion.
However, the ratio of observed and baseline clustering coefficients is not as large in any area as it is in the full coauthorship network.
Moreover, only two areas have mean distances close to those expected in an equivalent Erdös-Rényi random graph.
The best candidate for a small world—that is, a world with high clustering and as-random geodesic distances—is the Productivity and Innovation coauthorship network, despite it having the lowest small-world coefficient.</p>
<p>I suspect that network size adds considerable noise to these estimates.
Even the full coauthorship network <code>net</code> is barely large enough to exhibit any global structure that can be distinguished from randomness.
Applying the Humphries-Gurney test to a larger network, or implementing a more robust procedure such as that proposed by <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3604768/">Telesford <em>et al.</em> (2011)</a>, may yield cleaner results.</p>
<p><em>Note: I updated this post on July 28, 2019 after revising the <a href="https://github.com/bldavies/motuwp/">source data</a>. My results changed slightly due to retroactive author (re)assignments.</em></p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>The linked article is locked behind a paywall. However, Strogatz hosts <a href="http://www.stevenstrogatz.com/articles/collective-dynamics-of-small-world-networks-pdf">a free copy</a> on his website. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Habitat choices of first-generation Pokémon
https://bldavies.com/blog/habitat-choices-first-generation-pokemon/
Thu, 01 Mar 2018 00:00:00 +0000https://bldavies.com/blog/habitat-choices-first-generation-pokemon/<p>In this post, I use R’s <a href="http://igraph.org">igraph</a> package to analyse the cohabitation network among wild Pokémon species.
The underlying data come from <a href="https://github.com/veekun/pokedex">the GitHub repository</a> behind <a href="https://veekun.com">veekun</a>.</p>
<h2 id="matching-species-with-their-habitats">Matching species with their habitats</h2>
<p>I infer habitats from random encounter events in the international versions of Pokémon Red, Blue and Yellow.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>
I store these events in a data frame named <code>encounters</code>.
Each encounter has three attributes: the <code>location</code>, the <code>species</code> encountered and that species’ primary <code>type</code>.
I use these data to generate a species-location incidence matrix:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">habits</span> <span class="o"><-</span> <span class="nf">table</span><span class="p">(</span><span class="n">encounters</span><span class="o">$</span><span class="n">species</span><span class="p">,</span> <span class="n">encounters</span><span class="o">$</span><span class="n">location</span><span class="p">)</span>
</code></pre></div><p>The rows and columns of <code>habits</code> count where species habitate.
For example, summing the rows of <code>habits</code> yields the number of unique habitats for each species.
I store these sums as follows:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">pokemon</span> <span class="o"><-</span> <span class="nf">tibble</span><span class="p">(</span><span class="n">species</span> <span class="o">=</span> <span class="nf">rownames</span><span class="p">(</span><span class="n">habits</span><span class="p">),</span> <span class="n">ubiquity</span> <span class="o">=</span> <span class="nf">rowSums</span><span class="p">(</span><span class="n">habits</span><span class="p">))</span>
</code></pre></div><p>Goldeen, Magikarp and Poliwag are the most ubiquitous species.
Each habitates in 24 unique locations across the Kanto region.</p>
<p>The boxplots below show the distribution of <code>ubiquity</code> by species’ primary type.
Water-types have the highest median ubiquity, closely followed by Grass- and Normal-types.
Species with Dragon, Fairy or Ghost as their primary type each habitate in a single location.</p>
<p><img src="figures/ubiquity-distribution-1.svg" alt=""></p>
<p>The column sums of <code>habits</code> count the number of unique species that habitate in each location.
I store these sums as follows:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">locations</span> <span class="o"><-</span> <span class="nf">tibble</span><span class="p">(</span><span class="n">name</span> <span class="o">=</span> <span class="nf">colnames</span><span class="p">(</span><span class="n">habits</span><span class="p">),</span> <span class="n">diversity</span> <span class="o">=</span> <span class="nf">colSums</span><span class="p">(</span><span class="n">habits</span><span class="p">))</span>
</code></pre></div><p>I compute the mean value of <code>diversity</code> across the locations in which each species habitates via</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">pokemon</span><span class="o">$</span><span class="n">mean_diversity</span> <span class="o"><-</span> <span class="nf">colSums</span><span class="p">(</span><span class="nf">t</span><span class="p">(</span><span class="n">habits</span><span class="p">)</span> <span class="o">*</span> <span class="n">locations</span><span class="o">$</span><span class="n">diversity</span><span class="p">)</span> <span class="o">/</span> <span class="n">pokemon</span><span class="o">$</span><span class="n">ubiquity</span>
</code></pre></div><p><code>ubiquity</code> and <code>mean_diversity</code> have a correlation coefficient of about -0.22, indicating a weak negative relationship.
Thus, on average, more ubiquitous species tend to live in less diverse locations.
However, this relationship is skewed by a large number of species that cohabitate in only one or two locations, as shown in the chart below.</p>
<p><img src="figures/ubiquity-mean-diversity-1.svg" alt=""></p>
<p>The chart plots <code>mean_diversity</code> against <code>ubiquity</code>, along with the least-squares line of best fit.<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>
The top-left cluster comprises species that exclusively habitate inside Cerulean Cave or the Kanto Safari Zone.
This cluster has a strong positive effect on <code>mean_diversity</code> among species with low <code>ubiquity</code> values, driving the negative relationship between the two attributes.</p>
<h2 id="the-cohabitation-network">The cohabitation network</h2>
<p>Species reveal their preferences for spending time with each other through their choices of whether to share habitats.
The more frequently two species cohabitate, the stronger is their implied social connection.
The number of locations in which two species cohabitate is equal to the cross product of the two corresponding rows of <code>habits</code>.
I store these counts in a symmetric species-species adjacency matrix:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">cohabits</span> <span class="o"><-</span> <span class="n">habits</span> <span class="o">%*%</span> <span class="nf">t</span><span class="p">(</span><span class="n">habits</span><span class="p">)</span>
</code></pre></div><p>Each entry <code>cohabits[i, j]</code> is equal to the number of locations in which species <code>i</code> and <code>j</code> cohabitate, and each diagonal entry <code>cohabits[i, i]</code> is equal to the ubiquity of species <code>i</code>.</p>
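<p>Both properties are easy to verify on a toy incidence matrix (the species and location labels here are invented for illustration):</p>

```r
# Toy binary species-by-location incidence matrix
habits <- matrix(c(1, 1, 0,
                   1, 0, 1,
                   0, 1, 1),
                 nrow = 3, byrow = TRUE,
                 dimnames = list(c("a", "b", "c"),   # species
                                 c("x", "y", "z")))  # locations

cohabits <- habits %*% t(habits)

cohabits["a", "b"]                      # 1: a and b cohabitate only in x
all(diag(cohabits) == rowSums(habits))  # TRUE: the diagonal is ubiquity
```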
<h3 id="estimating-the-strength-of-species-social-ties">Estimating the strength of species’ social ties</h3>
<p>The raw cohabitation counts are an imperfect measure of the strength of the social ties between species.
For example, ubiquitous species tend to have higher cohabitation counts with all other species and so appear to be more social.
However, having many social connections may indicate that a species “spreads itself thin” and that each of its connections is actually quite weak.
Strong connections arise when two species spend lots of their time together and little of their time apart.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Jaccard_index">Jaccard index</a> provides a convenient measure of the tendency for two species to spend most of their time in each others’ company.
The index counts the number of locations in which two species cohabitate as a proportion of the locations in which at least one of those species habitates.
I define a function <code>jaccard</code> for computing Jaccard indices from an arbitrary cohabitation matrix <code>C</code> as follows.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">jaccard</span> <span class="o"><-</span> <span class="nf">function </span><span class="p">(</span><span class="n">C</span><span class="p">)</span> <span class="p">{</span>
<span class="n">U</span> <span class="o"><-</span> <span class="nf">matrix</span><span class="p">(</span><span class="nf">rep</span><span class="p">(</span><span class="nf">diag</span><span class="p">(</span><span class="n">C</span><span class="p">),</span> <span class="nf">nrow</span><span class="p">(</span><span class="n">C</span><span class="p">)),</span> <span class="n">ncol</span> <span class="o">=</span> <span class="nf">nrow</span><span class="p">(</span><span class="n">C</span><span class="p">))</span>
<span class="n">H</span> <span class="o"><-</span> <span class="n">U</span> <span class="o">+</span> <span class="nf">t</span><span class="p">(</span><span class="n">U</span><span class="p">)</span> <span class="o">-</span> <span class="n">C</span>
<span class="n">J</span> <span class="o"><-</span> <span class="n">C</span> <span class="o">/</span> <span class="n">H</span>
<span class="nf">return </span><span class="p">(</span><span class="n">J</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div><p>If <code>C = cohabits</code> then each column of <code>U</code> is equal to the vector <code>pokemon$ubiquity</code>, and each entry <code>H[i, j]</code> of <code>H</code> counts the number of locations in which at least one of species <code>i</code> and <code>j</code> habitates.
The Jaccard index <code>J[i, j]</code> obtains its maximum value of unity when species <code>i</code> and <code>j</code> habitate in precisely the same locations, and its minimum value of zero when they never cohabitate.
The more similar two species’ habitat choices, the higher is their shared Jaccard index.</p>
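<p>As a quick sanity check of <code>jaccard</code> (toy values, chosen so both extremes appear): two species with identical habitats obtain an index of one, and species that never cohabitate obtain zero.</p>

```r
# Condensed but behaviourally identical version of `jaccard` above
jaccard <- function(C) {
  U <- matrix(rep(diag(C), nrow(C)), ncol = nrow(C))
  C / (U + t(U) - C)
}

# Species 1 and 2 share both of their locations; species 3 lives elsewhere
habits <- matrix(c(1, 1, 0,
                   1, 1, 0,
                   0, 0, 1),
                 nrow = 3, byrow = TRUE)
J <- jaccard(habits %*% t(habits))

J[1, 2]  # 1: identical habitat choices
J[1, 3]  # 0: never cohabitate
```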
<p>I define the cohabitation network <code>net</code> as the weighted graph with adjacency matrix equal to <code>jaccard(cohabits)</code>:</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="nf">library</span><span class="p">(</span><span class="n">igraph</span><span class="p">)</span>
<span class="n">net</span> <span class="o"><-</span> <span class="nf">graph.adjacency</span><span class="p">(</span><span class="nf">jaccard</span><span class="p">(</span><span class="n">cohabits</span><span class="p">),</span> <span class="n">weighted</span> <span class="o">=</span> <span class="bp">T</span><span class="p">,</span> <span class="n">mode</span> <span class="o">=</span> <span class="s">'undirected'</span><span class="p">)</span>
<span class="n">net</span> <span class="o"><-</span> <span class="nf">simplify</span><span class="p">(</span><span class="n">net</span><span class="p">)</span> <span class="c1"># Remove loops</span>
</code></pre></div><h3 id="identifying-the-strongest-connections">Identifying the strongest connections</h3>
<p>The cohabitation network contains 1,549 (about 31%) of the 4,950 possible edges between its 100 vertices.
However, many of these edges have low weight and correspond to weak social connections between species, whereas I’m most interested in identifying which species share strong connections.</p>
<p>I identify an edge-induced subgraph of <code>net</code> that represents the strongest connections as follows.<sup id="fnref:3"><a href="#fn:3" class="footnote-ref" role="doc-noteref">3</a></sup>
First, I find a maximum spanning forest (MSF) of <code>net</code>; that is, an edge-induced subgraph that</p>
<ol>
<li>has the same vertex set as <code>net</code>,</li>
<li>has trees as components, and</li>
<li>obtains the maximum edge weight sum over all edge-induced subgraphs satisfying criteria 1 and 2.</li>
</ol>
<p>The MSF joins each species with one of the species with which it most frequently cohabitates.
However, depending on the algorithm used, the MSF generally doesn’t join every species with its most frequent cohabitant and therefore doesn’t necessarily contain the strongest connections in <code>net</code>.<sup id="fnref:4"><a href="#fn:4" class="footnote-ref" role="doc-noteref">4</a></sup>
Accordingly, I augment the MSF by taking its union with the subgraph induced by the edges in <code>net</code> of highest weight.
I choose the number of such edges to be equal to the order of <code>net</code> so as to achieve a mean vertex degree of about four.</p>
<p>I define a function <code>augmented_msf</code> for identifying the augmented MSF of a graph <code>G</code> as follows.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">augmented_msf</span> <span class="o"><-</span> <span class="nf">function </span><span class="p">(</span><span class="n">G</span><span class="p">)</span> <span class="p">{</span>
<span class="nf">E</span><span class="p">(</span><span class="n">G</span><span class="p">)</span><span class="o">$</span><span class="n">id</span> <span class="o"><-</span> <span class="nf">seq</span><span class="p">(</span><span class="nf">gsize</span><span class="p">(</span><span class="n">G</span><span class="p">))</span>
<span class="n">msf_ids</span> <span class="o"><-</span> <span class="nf">E</span><span class="p">(</span><span class="nf">mst</span><span class="p">(</span><span class="n">G</span><span class="p">,</span> <span class="o">-</span><span class="nf">E</span><span class="p">(</span><span class="n">G</span><span class="p">)</span><span class="o">$</span><span class="n">weight</span><span class="p">))</span><span class="o">$</span><span class="n">id</span>
<span class="n">cutoff</span> <span class="o"><-</span> <span class="nf">quantile</span><span class="p">(</span><span class="nf">E</span><span class="p">(</span><span class="n">G</span><span class="p">)</span><span class="o">$</span><span class="n">weight</span><span class="p">,</span> <span class="p">(</span><span class="nf">gsize</span><span class="p">(</span><span class="n">G</span><span class="p">)</span> <span class="o">-</span> <span class="nf">gorder</span><span class="p">(</span><span class="n">G</span><span class="p">))</span> <span class="o">/</span> <span class="nf">gsize</span><span class="p">(</span><span class="n">G</span><span class="p">))</span><span class="n">[1]</span>
<span class="n">aug_ids</span> <span class="o"><-</span> <span class="nf">which</span><span class="p">(</span><span class="nf">E</span><span class="p">(</span><span class="n">G</span><span class="p">)</span><span class="o">$</span><span class="n">weight</span> <span class="o">>=</span> <span class="n">cutoff</span><span class="p">)</span>
<span class="n">aug_msf</span> <span class="o"><-</span> <span class="nf">subgraph.edges</span><span class="p">(</span><span class="n">G</span><span class="p">,</span> <span class="n">eids</span> <span class="o">=</span> <span class="nf">E</span><span class="p">(</span><span class="n">G</span><span class="p">)</span><span class="nf">[unique</span><span class="p">(</span><span class="nf">c</span><span class="p">(</span><span class="n">msf_ids</span><span class="p">,</span> <span class="n">aug_ids</span><span class="p">))</span><span class="n">]</span><span class="p">)</span>
<span class="nf">return </span><span class="p">(</span><span class="n">aug_msf</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div><p>The third and fourth lines in the definition of <code>augmented_msf</code> identify the edges of <code>G</code> with which to augment its MSF.
For example, if <code>G</code> has order 20 and size 100 then the MSF of <code>G</code> is augmented by adding those edges in <code>G</code> with weights equal to or greater than the weight of the edge at the 80th percentile.</p>
<!-- This approach adds 143 edges to the MSF of `net`, rather than the predicted 100, because the 100th highest-weight edge in `net` shares a weight of 0.5 with 128 other edges. -->
<h3 id="visualising-the-network">Visualising the network</h3>
<p>The augmented MSF of <code>net</code> contains 242 edges and is drawn below.
Each vertex is coloured according to the corresponding species’ primary type and scaled according to that species’ ubiquity.
I use <a href="http://onlinelibrary.wiley.com/doi/10.1002/spe.4380211102/abstract">Fruchterman and Reingold’s (1991)</a> force-directed algorithm for determining vertices’ layout.</p>
<p><img src="figures/augmented-msf-1.svg" alt=""></p>
<p>The cohabitation network has two components: one large component of 98 different species and many types, and one isolated pair of Ground-types.
The latter contains Diglett and Dugtrio, which habitate exclusively in Diglett’s Cave.
Water-types are most socially connected to other Water-types, suggesting that there are few amphibious species in the Kanto region that spend most of their time in the water.
Poison-types tend to be closely connected to Ground- and Rock-types, which are, presumably, immune to toxicity.</p>
<p>The augmented MSF reveals two large, densely connected clusters of low ubiquity species.
These clusters represent Cerulean Cave and the Kanto Safari Zone, and are directly bridged by Chansey, Parasect and Rhyhorn.
There is also a small cluster of Fire- and Poison-types that cohabitate inside Pokémon Mansion, and a clique of four Bug-types found in Viridian Forest.</p>
<h2 id="estimating-species-social-influence">Estimating species’ social influence</h2>
<p>The structure of <code>net</code> reveals information about species’ social influence.
A simple measure of such influence is the <a href="https://en.wikipedia.org/wiki/Centrality#Degree_centrality">degree centrality</a> of each species, which counts the number of other cohabitating species.
The table below displays the species with the highest six degree centralities in the cohabitation network.</p>
<table>
<thead>
<tr>
<th align="center">Species</th>
<th align="center">Type</th>
<th align="center">Degree</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Goldeen</td>
<td align="center">Water</td>
<td align="center">82</td>
</tr>
<tr>
<td align="center">Magikarp</td>
<td align="center">Water</td>
<td align="center">82</td>
</tr>
<tr>
<td align="center">Poliwag</td>
<td align="center">Water</td>
<td align="center">82</td>
</tr>
<tr>
<td align="center">Krabby</td>
<td align="center">Water</td>
<td align="center">69</td>
</tr>
<tr>
<td align="center">Kingler</td>
<td align="center">Water</td>
<td align="center">64</td>
</tr>
<tr>
<td align="center">Ditto</td>
<td align="center">Normal</td>
<td align="center">56</td>
</tr>
</tbody>
</table>
<p>The three most degree-central species are also the three most ubiquitous and cohabitate with 82 of the 99 other species in my sample.
Eight of the 10 most degree-central species are Water-types.</p>
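<p>In <code>igraph</code>, degree centralities come straight from the <code>degree</code> function; a minimal sketch on a toy graph (in the post, the argument would be <code>net</code>):</p>

```r
library(igraph)

g <- make_graph(~ A-B, A-C, A-D, B-C)

# Count each vertex's neighbours and rank from most to least central
deg <- sort(degree(g), decreasing = TRUE)
deg  # A sits on top with degree 3
```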
<p>The <a href="https://en.wikipedia.org/wiki/Centrality#Betweenness_centrality">betweenness centrality</a> of each species measures the frequency with which that species lies on the shortest path between others in the cohabitation network.
Intuitively, more betweenness-central species tend to have more control over the spread of information due to their relative criticality in other species’ communication channels.</p>
<p>The six most betweenness-central species are tabulated below.
Goldeen, Magikarp and Poliwag are important conduits of information due to their high ubiquity.
Cubone takes fifth place because it is the only species through which Gastly and Haunter—both found exclusively inside Pokémon Tower—can communicate with species in the Safari Zone.</p>
<table>
<thead>
<tr>
<th align="center">Species</th>
<th align="center">Betweenness</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Goldeen</td>
<td align="center">269.15</td>
</tr>
<tr>
<td align="center">Magikarp</td>
<td align="center">269.15</td>
</tr>
<tr>
<td align="center">Poliwag</td>
<td align="center">269.15</td>
</tr>
<tr>
<td align="center">Ditto</td>
<td align="center">202.20</td>
</tr>
<tr>
<td align="center">Cubone</td>
<td align="center">190.00</td>
</tr>
<tr>
<td align="center">Krabby</td>
<td align="center">169.77</td>
</tr>
</tbody>
</table>
<p>The chart below compares species’ betweenness and degree centralities.
With the exception of Cubone, more betweenness-central species tend to have more cohabitants.
Water-types are relatively inefficient at accumulating betweenness centrality when they expand their social network, whereas Electric-types appear to gain a relatively large amount of betweenness centrality per extra cohabitant.</p>
<p><img src="figures/betweenness-degree-1.svg" alt=""></p>
<p>Species with densely connected social networks are unlikely to be very betweenness-central because their cohabitants can share information with each other directly.
The probability that two of a species’ cohabitants also cohabitate is given by the <a href="https://en.wikipedia.org/wiki/Clustering_coefficient#Local_clustering_coefficient">transitivity</a> of the corresponding vertex in <code>net</code>.</p>
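<p>Local transitivity per vertex is available via <code>transitivity</code> with <code>type = "local"</code>; a toy sketch (vertex names invented):</p>

```r
library(igraph)

g <- make_graph(~ A-B, A-C, B-C, C-D, D-E)

# Fraction of each vertex's neighbour pairs that are themselves adjacent
transitivity(g, type = "local", vids = c("A", "C"))

# C bridges the A-B-C triangle to D and E, so it is betweenness-central
betweenness(g)["C"]
```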
<p>The chart below plots species’ betweenness centralities against their transitivity within the cohabitation network.
The two attributes share a strong, negative and convex relationship.
Species whose cohabitants also cohabitate are less betweenness-central because the former lack exclusive control of their cohabitants’ channels for sharing information.
The exceptions to this trend are Cubone and Pikachu, which have unusually high and low betweenness centralities, respectively.
Pikachu habitate in two locations (Viridian Forest and the Kanto Power Plant), each of which contains a small number of species that frequently cohabitate and that generally have much higher degree centralities.
As a result, Pikachu have an unusually low betweenness centrality because their cohabitants are able to communicate with each other directly and with other species indirectly through their wider social networks.</p>
<p><img src="figures/betweenness-transitivity-1.svg" alt=""></p>
<h2 id="the-co-containment-network">The co-containment network</h2>
<p>I recycle my method of analysing the cohabitation network among species in order to explore the co-containment network among locations.
In the latter network, two locations are adjacent if and only if they contain a common species.
I generate the co-containment network from a binary location-location adjacency matrix as follows.</p>
<div class="highlight"><pre class="chroma"><code class="language-r" data-lang="r"><span class="n">cocontains</span> <span class="o"><-</span> <span class="nf">t</span><span class="p">(</span><span class="n">habits</span><span class="p">)</span> <span class="o">%*%</span> <span class="n">habits</span>
<span class="n">cocontains</span> <span class="o"><-</span> <span class="nf">pmin</span><span class="p">(</span><span class="n">cocontains</span><span class="p">,</span> <span class="m">1</span><span class="p">)</span> <span class="c1"># Remove parallel edges</span>
<span class="n">location_net</span> <span class="o"><-</span> <span class="nf">graph.adjacency</span><span class="p">(</span><span class="n">cocontains</span><span class="p">,</span> <span class="n">mode</span> <span class="o">=</span> <span class="s">'undirected'</span><span class="p">)</span>
</code></pre></div><p>The graph <code>location_net</code> contains 542 (about 60%) of the 903 possible edges between its 43 vertices.</p>
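The construction above can be checked on toy data. The incidence matrix below is hypothetical, but it shows why <code>t(habits) %*% habits</code> counts common species:

```r
# Hypothetical species-by-location incidence matrix (rows: species,
# columns: locations); 1 means the species inhabits the location.
habits <- rbind(
  pikachu = c(forest = 1, plant = 1, cave = 0),
  zubat   = c(forest = 0, plant = 0, cave = 1),
  geodude = c(forest = 0, plant = 1, cave = 1)
)

# Entry (i, j) counts the species that locations i and j both contain.
common <- t(habits) %*% habits
common["forest", "plant"]  # 1 (Pikachu)
common["forest", "cave"]   # 0

# Capping entries at 1 keeps the adjacency binary, as in the post.
cocontains <- pmin(common, 1)
```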
<p>The locations with the six highest mean ubiquities are tabulated below.
Viridian City and Pallet Town have the least unique demographies; the few species that habitate in these locations tend to also habitate in many other locations.
That Viridian City’s mean ubiquity and degree centrality are similar suggests that its four habitants usually cohabitate.</p>
<table>
<thead>
<tr>
<th align="center">Location</th>
<th align="center">Mean Ubiquity</th>
<th align="center">Degree</th>
<th align="center">Diversity</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Viridian City</td>
<td align="center">21.25</td>
<td align="center">25</td>
<td align="center">4</td>
</tr>
<tr>
<td align="center">Pallet Town</td>
<td align="center">18.40</td>
<td align="center">25</td>
<td align="center">5</td>
</tr>
<tr>
<td align="center">Celadon City</td>
<td align="center">16.40</td>
<td align="center">25</td>
<td align="center">5</td>
</tr>
<tr>
<td align="center">Cerulean City</td>
<td align="center">16.17</td>
<td align="center">25</td>
<td align="center">6</td>
</tr>
<tr>
<td align="center">Cinnabar Island</td>
<td align="center">16.00</td>
<td align="center">25</td>
<td align="center">7</td>
</tr>
<tr>
<td align="center">Route 1</td>
<td align="center">16.00</td>
<td align="center">24</td>
<td align="center">2</td>
</tr>
</tbody>
</table>
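A location's mean ubiquity can be recovered from the incidence matrix. Here is a minimal sketch with hypothetical data (the real computation would use <code>habits</code>):

```r
# Hypothetical incidence matrix: rows are species, columns are locations.
habits <- rbind(
  rattata = c(route1 = 1, route2 = 1, city = 1),
  pidgey  = c(route1 = 1, route2 = 1, city = 0),
  mew     = c(route1 = 0, route2 = 0, city = 1)
)

# A species' ubiquity is the number of locations it inhabits.
ubiquity <- rowSums(habits)

# A location's mean ubiquity averages the ubiquities of its habitants.
# (Row-wise recycling works because R stores matrices column-major.)
mean_ubiquity <- colSums(habits * ubiquity) / colSums(habits)
mean_ubiquity["route1"]  # (3 + 2) / 2 = 2.5
```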
<p>Finally, the table below shows the top six most betweenness-central locations.
Route 10 appears to be an important junction for information flow between species.
This is likely due to the diversity of its contained species and to the fact that Routes 10 and 11 boast the highest degree centralities in the co-containment network.
The Safari Zone, another highly diverse location, is also an important information relay.</p>
<table>
<thead>
<tr>
<th align="center">Location</th>
<th align="center">Betweenness</th>
<th align="center">Degree</th>
<th align="center">Diversity</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Route 10</td>
<td align="center">58.54</td>
<td align="center">39</td>
<td align="center">18</td>
</tr>
<tr>
<td align="center">Safari Zone</td>
<td align="center">55.89</td>
<td align="center">33</td>
<td align="center">27</td>
</tr>
<tr>
<td align="center">Route 11</td>
<td align="center">23.03</td>
<td align="center">39</td>
<td align="center">15</td>
</tr>
<tr>
<td align="center">Cerulean Cave</td>
<td align="center">20.28</td>
<td align="center">31</td>
<td align="center">28</td>
</tr>
<tr>
<td align="center">Route 6</td>
<td align="center">18.38</td>
<td align="center">38</td>
<td align="center">16</td>
</tr>
<tr>
<td align="center">Sea Route 21</td>
<td align="center">18.38</td>
<td align="center">38</td>
<td align="center">13</td>
</tr>
</tbody>
</table>
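The ranking in the table comes from vertex betweenness. As a sketch (on a toy graph rather than <code>location_net</code>), igraph's <code>betweenness()</code> counts the shortest paths passing through each vertex:

```r
library(igraph)

# Toy graph: a path A - B - C - D with E hanging off B.
g <- graph_from_literal(A - B, B - C, C - D, B - E)

# B lies on the shortest path between every pair of vertices it separates.
sort(betweenness(g), decreasing = TRUE)  # B = 5, C = 3, others 0
```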
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Restricting to random encounters excludes starter Pokémon, species obtainable only through evolution and “special” encounters (e.g., the Electrodes inside the Kanto Power Plant and the legendary birds) from the sample. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Observations in this and all other charts are coloured by the corresponding species’ primary type, and are plotted with a small amount of noise in order to reveal coincident points that would otherwise be hidden. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>This technique is based on <a href="http://science.sciencemag.org/content/317/5837/482">Hidalgo <em>et al.</em>&rsquo;s (2007)</a> method of representing the product space of internationally traded goods. <a href="#fnref:3" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p>For example, consider applying a greedy algorithm such as <a href="https://en.wikipedia.org/wiki/Prim's_algorithm">Prim’s</a> to a cohabitation network that contains (i) a large clique of species that cohabitate in a single location and (ii) several species that are spread across many different locations. The algorithm will first connect each species in the clique and then, in order to avoid creating cycles, branch out to connect the relatively weakly connected species until a spanning forest is formed. The resulting subgraph will be a MSF but will contain edges that have lower weights than some of the omitted edges in the clique. <a href="#fnref:4" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>