Victor Haghani, James White and Jerry Bell of Elm Partners Management conducted a fascinating experiment to investigate the value of getting tomorrow’s headlines today, and the trading acumen of finance students. The results may suggest to some that finance courses need an upgrade, but to me the study is interesting because of what it says about good decision-making.
The researchers picked 15 Wall Street Journal headlines from the last 15 years, blacked out the security price information, and had 116 finance students trade $50 of real money, betting on stock and bond index returns over the trading day before each headline was published. You can play the game yourself — but with no real money — here.
You may remember a 2016 experiment by Elm Partners in which 61 finance students were allowed to make real-money bets on biased coin flips. The headline message of the earlier study and this one is the same: Finance students are poor traders. Despite having tomorrow’s headlines to shape their trading, only 62 of the 116 made any money at all, and the median profit was only $1.11.
Perhaps you think that this is not the fault of the players. Maybe having tomorrow’s headlines isn’t useful for trading. But Elm Partners asked five experienced traders to play, too. They all made profits of $10 to $230. I played myself and made $138 (play money in my case), better than all but four of the students. But this massively underestimates the value of tomorrow’s headlines. Real traders spend a lot more time and energy constructing and managing their trades than anyone put into this game. They understand the news in context, what the market was focusing on, and what it was expecting. Moreover, they’re not making one-day bets on stock or bond indices, they’re constructing sophisticated trades using multiple instruments to make surgically precise bets over multiple time horizons. If a decent trading shop, or even a successful home day trader, got the day-ahead Wall Street Journal with prices blacked out, it would soon have all the money in the world.
My first question was, were the students able to extract any useful information from the news? Out of 2,067 trades by the 116 players, 1,065 made money while 1,002 lost. While that’s better than 50%, it could easily be the result of random chance. It does not meet the conventional 5% threshold for statistical significance.
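As a back-of-envelope check (my own arithmetic, not the researchers'), a simple sign test on those counts shows the win rate is well within the range of chance:

```python
# Sign test on the aggregate trade count: 1,065 winners out of 2,067 trades.
# Normal approximation to the binomial under the null hypothesis p = 0.5.
import math

n, wins = 2067, 1065
z = (wins - n / 2) / math.sqrt(n * 0.25)    # z ≈ 1.39
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p ≈ 0.17
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

A two-sided p-value around 0.17 is far short of the conventional 0.05 bar, which is the article's point.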
The students had two more slight edges, neither statistically significant. The 1,065 trades they got right made an average of 1.25%, while the 1,002 trades they got wrong lost an average of only 1.19%. They bet an average of 11.9 times capital on their winning trades, versus 11.2 times on their losing trades. But even combining all three edges, the students did not demonstrate statistically significant skill. In other words, it's plausible that their trading was random.
There's slight good news if we look at the students individually. Sixteen of them had profits that were statistically significant at the 5% level. We'd expect about six (5% of 116) to achieve that by random chance, so there appear to be some students with skill. The best performance by any student would occur only once in 157 trials by random chance, so it does not seem like we're looking at the results of a coin-flipping contest among 116 random guessers.
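Both numbers in that comparison are easy to verify with an exact binomial calculation (again my own arithmetic, offered as a sanity check):

```python
# If each of 116 students independently had a 5% chance of a "significant"
# result by luck alone, how surprising is seeing 16 of them clear that bar?
import math

n, p = 116, 0.05
expected = n * p  # ≈ 5.8, the "six by chance" in the text
# Exact binomial upper tail: probability of 16 or more lucky students.
tail = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(16, n + 1))
print(f"expected = {expected:.1f}, P(16 or more) = {tail:.5f}")
```

The upper-tail probability is tiny (well under 1%), supporting the inference that at least a few students showed real skill rather than luck.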
What about the wisdom of crowds? Even if individual students have only a slight ability to beat the market given the headlines, perhaps we could use their consensus trades to make a profit. This produces a remarkable pattern. Out of the 15 days, three produced the strongest consensus, with large majorities of students trading in the same direction and large average bet sizes. All three days were massive money losers for the group, including the two biggest loss days.
On six days there was no statistically significant consensus at all; five of those were money-losing, but the group's losses were small because bet sizes were small and many students were on the right side of the trades. The six days in between, with clear-but-not-overwhelming consensus, were all big wins for the group. On those six good days, the group had a 33% total return; on the other nine days it had a -20% return.
This experiment involves the intersection of two academic fields — combining opinions into optimal forecasts, and the relation between news media and financial markets. Philip Tetlock is the leading theorist in the former, and his son Paul Tetlock is prominent in the latter. Drawing on their work, plus my own experience, I find the results explainable.
There really is wisdom in crowds. A solid-but-not-overwhelming majority opinion of independent analysts is surprisingly reliable even when the individuals seem to have barely better than random prediction ability. Overwhelming consensus, along with the high certainty of individual experts, should be viewed with caution.
My pop psychology explanation for this observation is that good judgment requires listening carefully to many quiet voices — both inside your own head and among groups. The majority of these voices is right more often than not. But the loudest voices — such as fear and greed inside your head, and aggressive, opinionated people in groups — are unreliable. When loud voices dominate, you get strong consensus for bad ideas. That's true in your head — when you're dead certain of something, it's time to step back and reconsider. Your best decisions will feel right but come with a realistic appreciation that they might be wrong. It's true of groups — quiet consensus among independent thinkers, with minority opinions respected but overruled, is a better guide than blind enthusiasm.
Bloomberg News provided this article. For more articles like this please visit bloomberg.com.
Read more articles by Aaron Brown