Let’s face it—we love exciting announcements. Why talk about the small technical improvements of a given artificial intelligence (AI) system when you can prognosticate about the coming advent of artificial general intelligence (AGI)? However, focusing too much on AGI risks missing many incremental improvements in the space along the way. This is very much like how focusing solely on when cars can literally drive themselves risks missing all the incremental assisted driving features being added to cars all the time.
DeepMind at the Forefront…AGAIN
The coverage of AlphaGo, DeepMind’s1 system that beat professional Go player Lee Sedol, was a game changer. Now there are AlphaZero, AlphaFold and more. DeepMind has made incredible progress in showing how AI can be applied to real problems. AlphaFold, for example, predicts how a given protein will fold, and accurately knowing the shapes of proteins unlocks enormous potential in how we think about all sorts of medical treatments.
The mRNA-based Covid-19 vaccines were based largely on targeting the shape of the specific ‘spike protein.’ The broader protein-folding problem is one that humans had been working on for more than 50 years.2
However, DeepMind recently presented a new ‘generalist’ AI model called Gato. Think of it this way: AlphaGo focuses specifically on the game of Go and AlphaFold focuses specifically on protein folding; they are not generalist AI applications. In contrast, Gato can3:
- Play Atari video games
- Caption images
- Chat
- Stack blocks with a real robot arm
In total, Gato can do 604 tasks. This is very different from the more specialized AI applications that are trained with specific data to optimize one task.
So, AGI Is Now on the Horizon?
To be clear, full AGI is a significant jump beyond anything achieved to date. It’s possible that, with an increase in scale, the approach used by Gato could lead to something closer to AGI than anything built so far. It’s equally possible that increasing scale alone goes nowhere. AGI may require breakthroughs that have not yet been made.
People love to get hyped about AI and its potential. In recent years, the development of GPT-3 by OpenAI4 was a major milestone, as was the image generator DALL-E. Both were huge achievements, but neither has led to technology exhibiting human-level understanding, and it is unknown whether the approaches behind either will naturally lead to AGI.
If We Cannot Say When AGI Will Come, What Can We Say?
While massive breakthroughs like AGI may be difficult, if not impossible, to predict with certainty, AI broadly has been undergoing an incredible upswing. The recently published Stanford AI Index report is extremely useful in that one can see:
- The magnitude of the investment pouring into the space. Investment, in a sense, partly measures confidence, in that there has to be a reasonable belief that productive activity will result from the efforts being funded.
- The breadth of AI activities and how those activities are showing improving metrics across the board.
The Growth of AI Investment
Figure 1 shows that the progression of investment growth has been staggering. We recognize this is driven partly by the excitement and potential of AI itself and partly by the general environment. The large figures in 2020 and 2021 had to be influenced by the fact that the cost of capital was minimal and money was chasing exciting stories with potential profits far in the future. Based on what we know today, it would be difficult to predict that the 2022 figure will outpace 2021’s, but it could still be a strong absolute amount of money.
It's also interesting to consider the evolution of the components of investment:
- 2014 was defined by public offerings, a category that in other years was generally at the smaller end of the spectrum relative to the totals.
- The primary driver of consistent growth in investment was on the private side, so figure 1 largely depicts a cyclical upswing in private investment, which we recognize may not continue on a straight-line upward trend throughout the 2020s.
Figure 1: Global Corporate Investment in AI by Investment Activity, 2013–21
What Activities Is the Money Funding?
Aggregate investment amounts are one thing, but it’s more concrete to consider specific areas of activity. Figure 2 is helpful in that regard, providing a sense of the change in 2021 relative to 2020.
- In 2021, ‘Data Management, Processing, Cloud,’ ‘Fintech’ and ‘Medical and Healthcare’ led the way, each breaking $10 billion.
- It’s notable that in the 2020 (purple) data, ‘Medical and Healthcare’ led with around $8 billion, which puts the year-over-year increases for ‘Data Management, Processing, Cloud’ and ‘Fintech’ in starker relief.
Figure 2: Private Investment in AI by Focus Area, 2020 vs. 2021
Is AI Technically Improving?
This is a fascinating question, the answer to which may have nearly infinite depth and may fill countless academic papers to come. What we can note here is that it involves two distinct efforts:
- Designing, programming or otherwise creating the specific AI implementation.
- Figuring out the best ways to test if it is actually doing what it’s supposed to or improving over time.
I find ‘semantic segmentation’ particularly interesting. It sounds like something only an academic would ever say, but the idea is simple: in a picture of a person riding a bike, you want the AI to know which pixels are the person and which pixels are the bike.
If you are thinking, ‘Who cares whether sophisticated AI can discern the person from the bike in such an image?’, I grant you it may not have the highest application value. However, picture an internal organ shown on a medical imaging device, and think about distinguishing healthy tissue from a tumor or lesion, pixel by pixel. Can you see the value that could bring?
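To make that concrete, here is a minimal sketch of what per-pixel labeling looks like in practice. It is purely our own illustration, assuming a recent version of the open-source PyTorch and torchvision libraries and a hypothetical image file; it is not drawn from DeepMind’s work or the Stanford report.

```python
# Minimal sketch: semantic segmentation with a publicly available pretrained
# model (DeepLabV3 from torchvision). Assumes torch, torchvision and Pillow
# are installed; "photo.jpg" is a hypothetical image of a person on a bike.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()  # pretrained, VOC-style labels

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)            # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]                  # shape: (1, num_classes, H, W)

labels = logits.argmax(dim=1).squeeze(0)          # shape: (H, W), one class id per pixel

# In the Pascal VOC label set these weights use, class 15 is "person" and
# class 2 is "bicycle," so we can count which pixels belong to which.
print("person pixels: ", int((labels == 15).sum()))
print("bicycle pixels:", int((labels == 2).sum()))
```

Swap the photograph for a medical scan and the classes for ‘healthy tissue’ and ‘lesion,’ and the same pixel-by-pixel idea is what gives the technique its practical value.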
The Stanford AI Index report breaks down specific tests designed to measure how AI models are progressing in areas such as the following (a simple illustration of this kind of scoring appears after the list):
- Computer vision
- Language
- Speech
- Recommendations
- Reinforcement learning
- Hardware training times
- Robotics
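To give a flavor of what such a test can look like, here is a small, purely illustrative sketch of intersection-over-union (IoU), a common way to score the semantic segmentation task described above. It is our own toy example, not one of the report’s benchmarks.

```python
# Illustrative sketch: intersection-over-union (IoU), a standard score for
# semantic segmentation. It measures how much the predicted pixel labels
# overlap with a hand-labeled "ground truth" mask for a given class.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray, class_id: int) -> float:
    """Overlap between predicted and true pixels of one class (0 to 1)."""
    pred_mask = pred == class_id
    true_mask = truth == class_id
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(intersection / union) if union > 0 else 1.0

# Tiny 4x4 toy example: class 1 = "person," class 0 = background.
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],     # the model missed one "person" pixel here
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])

print(f"IoU for 'person': {iou(pred, truth, 1):.2f}")  # 5 shared / 6 total = 0.83
```

A score of 1.0 means the prediction matches the human labels exactly; benchmark suites like those the AI Index tracks typically average this kind of per-class score over many images to quantify progress.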
In many of these areas, AI models are approaching what could be defined as the ‘human standard,’ but it’s also important to note that most of these systems specialize only in the one specific task for which they were designed.
Conclusion: It’s Still Early for AI
With any megatrend, it’s important to have the humility to recognize that we don’t know with certainty what will happen next. Within AI, we can anticipate certain innovations, whether in vision, autonomous vehicles or drones, but we must recognize that the biggest returns may come from activities we aren’t yet tracking. For those interested in an investment vehicle designed to gain exposure to AI, consider the WisdomTree Artificial Intelligence and Innovation Fund (WTAI).
Stay tuned for Part 2, where we discuss recent results of certain companies operating in the space.
1 DeepMind is a subsidiary of Alphabet. As of 23 June 2022, Alphabet was a 1.46% weight in WTAI.
2 Source: Will Douglas Heaven, “DeepMind’s protein-folding AI has solved a 50-year-old grand challenge of biology,” MIT Technology Review, 11/30/20.
3 Source: Melissa Heikkila, “The Hype around DeepMind’s New AI Models Misses What’s Actually Cool About It,” MIT Technology Review, 5/23/22.
4 OpenAI has 0% holding in the WisdomTree Artificial Intelligence and Innovation Fund (WTAI).
© WisdomTree, Inc.