What We Want and What We Get: Error Bias in Investing

As we observe events in realms such as financial markets, politics, or weather, we tend to form beliefs, whether explicit or implicit, about cause and effect, or about whether the events were positive or negative, good or bad. Science has formalized this process as testing a hypothesis with empirical data. One of the tradeoffs when evaluating beliefs in light of evidence, or a hypothesis in light of data, is which type of error we would prefer to make if our beliefs turn out to be wrong. In the language of statistics, a false positive means accepting a claim that is in fact untrue, while a false negative means rejecting a claim that is in fact true. In the investment realm, this error bias can affect our beliefs and behaviors, such as our tolerance for risk and our allocation choices. Several examples help illustrate the point.
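
The tension between the two error types can be made concrete with a small simulation. The sketch below is purely illustrative: the distributions, threshold values, and sample size are invented assumptions, not drawn from any real data. A decision rule flags an "event" whenever a noisy signal crosses a threshold; moving the threshold lower buys fewer missed events (false negatives) at the cost of more false alarms (false positives), and vice versa.

```python
import random

# Illustrative sketch of the false positive / false negative tradeoff.
# A noisy signal is drawn from one of two hypothetical distributions:
# quiet conditions centered at 0, and a real event centered at 2.
# The decision rule flags an event whenever the signal exceeds a threshold.

random.seed(42)
N = 100_000

quiet = [random.gauss(0.0, 1.0) for _ in range(N)]  # no event occurred
event = [random.gauss(2.0, 1.0) for _ in range(N)]  # an event occurred

for threshold in (0.5, 1.0, 1.5, 2.0):
    false_positive_rate = sum(x > threshold for x in quiet) / N
    false_negative_rate = sum(x <= threshold for x in event) / N
    print(f"threshold={threshold:.1f}  "
          f"false positives={false_positive_rate:.1%}  "
          f"false negatives={false_negative_rate:.1%}")
```

No threshold eliminates both error types at once; choosing where to set it is choosing which error we would rather live with. The examples that follow show how different domains make that choice differently.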

Innocent until proven guilty

Criminal law presumes that the accused are "innocent until proven guilty." The presumption of innocence (our "belief") implies that we would rather allow some of the guilty to escape conviction, in error, than presume guilt and ensnare some of the innocent along the way. We could call this presumption a bias toward false negative errors: some who are guilty are nevertheless not convicted.

Better safe than sorry

During an October weekend last year, western Washington state, where Saturna Capital's offices are located, battened down for the forecast of an unusually powerful fall windstorm. Several factors combined to make the forecast particularly dire: the storm involved the remnants of a violent typhoon that had been swept up in the fast-moving jet stream east of Japan and was rapidly regaining strength as a mid-latitude cyclone. Early models suggested the storm's minimum pressure, a gauge of its intensity, could be as low as, or lower than, that of the infamous October 1962 Columbus Day storm, which was also the progeny of a typhoon. Finally, the local news media played their role in drawing considerable attention to the approaching storm. In the end, the storm's peak intensity was lower than expected, and it swept rapidly through a narrow area away from population centers, minimizing its impact. In short, the dire forecast and the considerable attention it received could be characterized as a false positive error.

The forecast for such a severe storm, drawn from the meteorological models, was perhaps not discounted appropriately considering the actual rarity of such powerful storms in the historical record. Meanwhile, the news media appear biased toward giving attention to potential newsmaking events rather than ignoring or minimizing them. Still, the biases that may have led to this false positive probably serve the public good. It is almost certainly better to have children cooped up indoors when a violent storm is suspected of approaching but never shows up than to have them playing outdoors amid violent gusts, falling branches, and crashing trees because meteorologists "appropriately discounted" the likelihood of their forecast based on the historical record. It is in this sense that we prefer false positives in forecasts of severe weather: better safe than sorry.
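
Bayes' rule shows why a dire forecast of a rare event should, on the numbers alone, usually be discounted. The sketch below uses entirely invented figures: the base rate, hit rate, and false alarm rate are illustrative assumptions, not estimates of real storm frequencies or model skill.

```python
# Discounting a forecast by the base rate of the event, via Bayes' rule.
# All numbers are invented for illustration only.

def posterior(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """P(event | dire forecast) by Bayes' rule."""
    p_forecast = hit_rate * base_rate + false_alarm_rate * (1.0 - base_rate)
    return hit_rate * base_rate / p_forecast

# Suppose a storm of this severity strikes on roughly 1 in 500 autumn days,
# and the models flag 90% of real storms but also raise a dire forecast on
# 5% of ordinary days.
p = posterior(base_rate=1 / 500, hit_rate=0.90, false_alarm_rate=0.05)
print(f"P(severe storm | dire forecast) = {p:.1%}")  # roughly 3.5%
```

Under these made-up numbers, the large majority of dire forecasts would turn out to be false positives, which is exactly the discounting the historical record suggests. It is the asymmetric cost of a missed storm, not the raw probability, that makes acting on such forecasts reasonable anyway.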

Whether it is criminal law favoring false negative errors or severe weather forecasting favoring false positive errors, these biases are consistent with a shared moral and cultural framework. In other situations, however, the prevalent error bias may be morally neutral or even objectionable. One example that has recently come to attention is the so-called "replication crisis" in psychology.