Two years ago, the Journal of Finance, the most prestigious journal in the field, retracted a published paper because of data errors. It was either the first or second withdrawal ever by a top-three finance journal. Now the Journal of Financial Economics, another influential publication, has made its own retraction. Some people are citing these events to declare a replication crisis in finance. They are missing the point entirely.
The retraction of the 2019 Journal of Financial Economics paper, “Common Risk Factors in the Cross-Section of Corporate Bond Returns,” is actually a story of science working. This is particularly true for the nascent field of research on the corporate bond market, where data is far murkier than in most other markets.
Let me take a few steps back to explain.
In 2005, John Ioannidis, now a professor at Stanford, published his seminal paper, “Why Most Published Research Findings Are False.” This does not mean most academic authors are dishonest, although dishonesty is the problem for some papers. The larger issue is that few academics can produce the quantity of high-quality research that promotion and tenure committees demand, not to mention what university public relations departments encourage.
That leads to massaging data, misrepresenting results, exaggerating claims, rushing to publication with insufficient vetting and reworking the same research into multiple papers. Peer review makes things worse, enforcing conformity rather than auditing results and ensuring quality.
Nevertheless, in most fields, knowledge progresses despite the many false papers for two main reasons. First, most published research findings are ignored and, therefore, do little harm. Second, the research findings that attract attention are checked both by people trying to replicate the original claims and by those doing follow-on work with different data and methodologies.
Only when a broad consensus of researchers agrees is a finding absorbed into the conventional wisdom of the field. Reputation comes not from the quantity of papers published but from whether publications prove enduring and reliable.
There are many exceptions, both specific false findings that live on like horror movie villains and entire fields that succumb to nonsense, but in most fields most of the time the rickety system works.
The retraction of the Journal of Financial Economics paper needs to be considered in this light.
The paper claimed to find three reasons, called “factors” in the literature, that explained why corporate bonds issued by similar companies with similar maturities performed differently — credit risk, liquidity risk and downside risk.
The reasons were not surprising. Everyone knows that investors demand higher expected returns for bonds with more credit risk and less liquidity, and downside risk has been found to command extra expected returns in many contexts. The paper attracted attention because of the size and certainty of the results and the claim that its factors outperformed competing models for expected returns on corporate bonds.
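To make the mechanics concrete: a factor model of this kind boils down to a cross-sectional regression of bond returns on a handful of risk exposures, with the estimated coefficients measuring how much extra return each risk commands. Here is a minimal sketch with simulated data; the exposures and premia below are invented for illustration, not the paper’s actual data or method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cross-section: 500 bonds, each with exposures ("betas") to
# three illustrative risks. All numbers are made up for illustration,
# not the retracted paper's data or estimates.
n_bonds = 500
exposures = rng.normal(size=(n_bonds, 3))        # credit, liquidity, downside
true_premia = np.array([0.020, 0.010, 0.005])    # assumed risk premia
noise = rng.normal(scale=0.05, size=n_bonds)     # idiosyncratic return noise

excess_returns = exposures @ true_premia + noise

# Cross-sectional regression: how much extra return does one unit of
# each exposure command?
X = np.column_stack([np.ones(n_bonds), exposures])
coef, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
print("estimated premia:", coef[1:])  # close to true_premia in a large sample
```

A real study must also construct the exposures from messy bond data, which is where, as we will see, the trouble tends to start.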
This is precisely the kind of paper that draws attention. The results were consistent with conventional wisdom but strong enough to clarify thinking on the subject and to offer opportunities for improved portfolio management. Papers that challenge conventional wisdom or muddy the waters are more often ignored.
When people tried to replicate the paper’s results, or test them with different data sets or approaches, or apply them in practice, they failed. This is common and rarely leads to retraction. The retraction became necessary only because investigators examined the data and code supplied by the paper’s authors and found misalignments in the data.
The paper also attracted skeptical attention because it combined two problematic areas of research in finance: factor models and corporate bond market data.
While factor models are among the most important innovations in both the theory and practice of finance in the last 30 years, they have given birth to a “factor zoo” populated with hundreds of documented factors, most of which prove ephemeral. Some are fortuitous correlations that do not persist; others are renamed minor variants of known major factors; still others come from misrepresented or misleading data.
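The data-mining mechanism behind the factor zoo is easy to demonstrate. In the illustrative simulation below (pure noise, no real data), 200 meaningless candidate factors are tested against the same returns; roughly 5% clear the usual significance bar by chance alone, and those lucky survivors almost all fail when re-tested on fresh data:

```python
import numpy as np

rng = np.random.default_rng(1)

# 120 months of returns and 200 candidate "factors", all pure noise,
# so any apparent relationship is a fluke.
T, n_candidates = 120, 200
returns = rng.normal(size=T)
candidates = rng.normal(size=(T, n_candidates))

def abs_t_stat(y, x):
    """Absolute t-statistic from a univariate regression of y on x."""
    beta = (x @ y) / (x @ x)
    resid = y - beta * x
    se = np.sqrt((resid @ resid) / (len(y) - 1) / (x @ x))
    return abs(beta / se)

t_in = np.array([abs_t_stat(returns, candidates[:, j])
                 for j in range(n_candidates)])
discovered = np.flatnonzero(t_in > 1.96)  # the "published" factors
print(f"{discovered.size} of {n_candidates} look significant in-sample")

# Out of sample the factors are still pure noise, so their new-period
# realizations are fresh random draws; almost none stay significant.
new_returns = rng.normal(size=T)
new_draws = rng.normal(size=(T, discovered.size))
t_out = np.array([abs_t_stat(new_returns, new_draws[:, j])
                  for j in range(discovered.size)])
print(f"{np.sum(t_out > 1.96)} survive out-of-sample")
```

With enough candidates tested against the same data, something always looks significant, which is why out-of-sample checks matter so much in this literature.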
Additionally, the quality of data for the corporate bond market is much lower than that for equities, futures, currencies or top government bonds. Issuers may have hundreds of bonds, and each debt instrument has unique features — maturity, coupon, security and terms. Most bonds trade infrequently, and price information is noisy. When companies run into trouble, there can be months or years of uncertainty with partial payments made at different times, making it hard to define the return to holders.
The modern literature on corporate bond factors began less than 20 years ago, soon after the Financial Industry Regulatory Authority’s predecessor, the National Association of Securities Dealers, introduced its trade-reporting system, the Trade Reporting and Compliance Engine, or Trace, in 2002. Despite all the false findings since, we know much more about corporate bond factors today than we did before Trace.
This retraction should sharpen researchers’ attention to data details and speed the construction of common corporate bond data sets maintained by data professionals, so researchers can concentrate on finance rather than on data cleaning and alignment.
A recent paper, “Corporate Bond Factors: Replication Failures and a New Framework,” found that only 27% of the most commonly cited corporate bond factors could be reproduced using out-of-sample data (that is, data other than what was used in the original papers) and that “the corporate bond literature is based on data full of errors.”
Yet this is not a field in crisis. There are important questions that honest researchers are struggling to address despite challenges of data and theory. Vigorous debate exposing errors is a sign of health, not disease.