Copilot, Not Autopilot: How Generative AI Augments, but Doesn’t Replace Active Management

AI, with its data analysis and predictive power, can revolutionize investing. However, humans remain a crucial part of the process. Franklin Templeton Investment Solutions presents use cases showing how different investors can harness AI to achieve their desired outcomes and streamline their workflows.

Artificial intelligence (AI) is commonly defined as machines that mimic the cognitive functions of the human brain. In some cases, like playing checkers, where the rulebook is simple, this is a relatively low bar. Indeed, this was one of the first use cases of AI, by Arthur Lee Samuel in 1952.1 However, the bar rises exponentially with every notch of complexity. It wasn’t until 1997 that Garry Kasparov would lose a full match to Deep Blue.2 It took nearly two more decades, even with all the exponential strides in computing power over that time, before AlphaGo beat Go grandmaster Lee Sedol in 2016.3 Thus, while advances in AI, including the ones we’ll discuss in this article, are expanding Earth’s collective cognitive ability, it is premature to seek shelter from sentient robot overlords, or even to fear that they’ll fully replace many knowledge workers, such as investment professionals.

Instead, with the advent of large language models (LLMs), which are deep learning algorithms trained on gigantic datasets, AI’s output can range from concise summaries to detailed insights. What may first come to mind is OpenAI’s GPT-3, the model family that gave rise to ChatGPT.4 GPT-3 was trained on nearly the entirety of the internet and most books.5 This gave its neural network 175 billion parameters,6 which it uses to opine on topics ranging from the banal to the sublime. With terabytes of training data, vast computing power amplified by distributed processing, and some old-school human ingenuity, the applications of AI to many fields, including investing, will continue to advance rapidly. While many of these are beyond the scope of this introductory article, we present use cases showing how different investors can harness AI to potentially improve their outcomes and workflows.

AI capabilities: Data analysis and predictive power

Distilled to its essence, investing is determining the fair value of assets based on all the public information that can be gathered, then buying or selling them when prevailing market prices differ from those values. The sheer amount of relevant data is vast—financial documents, earnings transcripts, regulatory filings, news articles, day-long congressional testimonies, and nowadays even Reddit conversations and tweets. This data is noisy, non-normal and increasingly unstructured (that is, inherently difficult to analyze). LLMs can both consume and, critically, understand this data at rates eclipsing any analyst team.

A basic output of this task is the ability to summarize information for human consumption—whether it’s thousands of social media threads written in zoomer vernacular (no cap7) or the dense legalese of a corporate deposition (veritably8). Taking it a step further, AI can combine different data sets to extract insights not immediately apparent to even a seasoned human investor.
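To make the summarization step concrete, here is a toy sketch of classical extractive summarization—scoring sentences by the frequency of the words they contain and keeping the highest-scoring ones. This is far simpler than what an LLM does (it copies sentences rather than generating new text), and the function name and scoring rule are illustrative assumptions, not any production method:

```python
import re
from collections import Counter

def extractive_summary(text, k=2):
    """Toy extractive summarizer: score each sentence by the total
    corpus frequency of its words, then keep the top-k sentences in
    their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = [
        (sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
        for i, s in enumerate(sentences)
    ]
    # Highest scores first, then restore document order among the winners.
    top = sorted(sorted(scored, reverse=True)[:k], key=lambda t: t[1])
    return " ".join(s for _, _, s in top)
```

An LLM-based summarizer would instead generate an abstractive summary in its own words, but the basic contract—long noisy text in, short ranked digest out—is the same.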

So, should we all retire and let the machines take over? Not so fast. When properly prompted, LLMs are quick to offer answers with the confidence of an economist spouting talking points on TV. This is because LLMs are trained, on a Pavlovian level, to offer responses humans will trust: most algorithms include a reward function for providing acceptable answers. But is their confidence justified? That depends on many factors, and even when fed high-quality data, deep learning algorithms are fallible. For example, transformer models (the architecture underlying most LLMs) can easily veer off track, or hallucinate, because they work by sequentially predicting the next most probable word in a sentence. This is an autoregressive process, in which words the LLM generated itself are used to predict the next ones. While at first this sounds similar to how humans think—after all, the words we say next are predicated on the ones that just left our mouths—LLMs have a much harder time realizing when they are talking nonsense. Recognizing when confident-sounding AI is abjectly wrong, phrasing questions with precision, fine-tuning its training, and feeding it the most nutritious data are all reasons why humans remain a crucial part of the process. We offer practical examples in the world of investing below.
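The autoregressive loop described above can be caricatured in a few lines. In this deliberately tiny sketch (the word table and probabilities are entirely made up), each next word depends only on the model's own prior output, and nothing in the loop checks whether the growing sentence still makes sense—a crude analogue of how a confident continuation can drift into nonsense:

```python
# Made-up "most probable next word" table standing in for a trained model.
NEXT_WORD = {
    "rates": "rose",
    "rose": "sharply",
    "sharply": "last",
    "last": "quarter",
}

def generate(prompt, max_words=6):
    """Toy autoregressive generation: repeatedly append the single most
    probable next word, conditioning only on the model's own output."""
    words = prompt.lower().split()
    while len(words) < max_words:
        nxt = NEXT_WORD.get(words[-1])  # prediction feeds on prior predictions
        if nxt is None:
            break  # the toy table has no continuation
        words.append(nxt)
    return " ".join(words)
```

Real transformers sample from a probability distribution over an entire vocabulary rather than a lookup table, but the structural point is the same: generation is conditioned on the model's own earlier output, with no built-in referee for truth.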