June 2023

ONE MORE THING…

  • Market timing is hard

  • “Don’t fight the Fed. Unless…”

  • “I don’t know that he has any real friends”


Questions about AI in investment management

Artificial Intelligence has been widely credited as the driving force behind the recent market gains (Nasdaq Set For Best First Half In Its 52-Year History Amid AI-Fueled “Market Euphoria”).  ChatGPT, with its record-breaking 100 million monthly active users just two months after launch(!!), and other “Generative Pre-trained Transformer” (GPT) technologies offer promise for virtually every aspect of business, investing, and life.

AI has dominated the financial media for most of this year.  So rather than adding to that deluge, here are some questions I will be looking to answer as the adoption of GPT technologies takes root.

1. Can GPT technologies ever be truly predictive? 

Experienced investment managers know the perils of backtesting.  Chief among them is overfitting: mistaking noise for signal.  There is always a correlative explanation for prior data - but unless there is a causal explanation, backtested theses are unreliable at best.
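
To make the danger concrete, here is a toy sketch in Python (entirely synthetic data and a made-up selection rule, purely for illustration) of how a backtest can “discover” an edge that is pure noise:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1,000 days of pure-noise daily "returns" and 500 random +/-1 trading signals.
returns = rng.normal(0, 0.01, 1000)
signals = rng.choice([-1, 1], size=(500, 1000))

train, test = slice(0, 500), slice(500, 1000)

# The "backtest": pick whichever signal performed best in-sample.
in_sample = (signals[:, train] * returns[train]).sum(axis=1)
best = in_sample.argmax()
out_of_sample = (signals[best, test] * returns[test]).sum()

print(f"best signal, in-sample:     {in_sample[best]:+.2%}")
print(f"same signal, out-of-sample: {out_of_sample:+.2%}")
```

The winning rule looks brilliant in-sample only because it won a lottery among 500 coin-flippers; out of sample, the “edge” evaporates.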

ChatGPT was trained on data through September 2021.  Are ChatGPT’s explanations of “old” data likely to persist into the future? 

Framed another way:  Markets are necessarily forward-looking machines, so will GPT technologies that are built on the past really be able to offer predictive value about the future?

2. Can GPT technologies pass the Turing Test?

It’s time for AI chatbots to put their money where their mouth - er, chip - is.

The Turing test, named after Alan Turing, is a test of whether a computer’s output is indistinguishable from a human’s.  A common application of this test is with customer service chats on websites - in holding a conversation with you and (hopefully!) resolving your issues, are you able to tell whether you were talking to a human or a bot?  If not, then that bot passed the Turing test.

Extending the Turing test to investment management, will GPT technologies be able to replace financial advisors, both in the quality of financial performance and in trusted interpersonal communications?

This question arose years ago with robo-advisors, of course, and the technology behind them has improved significantly since.  Will GPT technologies improve similarly - and beyond - to the point where robo-conversations are just as natural, and clients trust their robo-advisors just as much as they trust their human advisors?

3. Can GPT technologies beat the market over long time horizons?

Failing to outperform the S&P 500 Index has been the bane of the active management industry, evident in the continued flows out of actively managed funds and into passively indexed funds (ICI Factbook 2023, page 48):

[Chart: net new cash flows out of actively managed funds and into index funds (ICI Factbook 2023, page 48)]

Of course, investors are already trying to use ChatGPT to beat indexing.  It’s early, but here’s one such effort: Opinion: This AI-powered stock portfolio beat the S&P 500 and left market pros in the dust.  Not surprisingly, this was over a mere 8-week window(!!).

If the evidence becomes clear that GPT technologies can in fact beat indexing over long time horizons, and everyone subsequently starts buying the same GPT-recommended stock picks, would the familiar forces of arbitrage take root and bring returns back to the efficiently priced equilibria reflected in the indices?

4. What do GPT technologies mean for market efficiency?

I have argued that markets are “mostly” efficient, a view I have aptly named the “Mostly Efficient Markets Hypothesis”.  It seems reasonable to say that financial markets have never been more efficient, given the scale of the internet, widely available financial data, virtually frictionless trading platforms, and regulations promoting transparent and fair market operations.  How, then, will GPT technologies squeeze even more market inefficiency out of this metaphorical turnip?

The technology is very promising!  For example, from Hedge Funds Are Deploying ChatGPT to Handle All the Grunt Work:

“Fed researchers found [ChatGPT] beats existing models such as Google’s BERT in classifying sentences in the central bank’s statements as dovish or hawkish.”

This seems like an obvious upgrade to existing technologies that rapidly analyze the Fed’s meeting minutes for changes and actionable insights the instant those minutes are released.  Market inefficiencies do not have to be large to be profitable, so corporate annual reports, Fed meeting minutes, and countless other documents seem like particularly fruitful places for applying GPT technologies in market-efficiency-improving ways.
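
For the curious, here is a minimal sketch of that sort of classifier, assuming the openai Python package’s ChatCompletion interface (as it stood in mid-2023); the prompt, model choice, and sample sentence are my own illustration, not the setup from the Fed research:

```python
import openai

openai.api_key = "sk-..."  # your API key here

# An FOMC-style sentence (paraphrased for illustration).
sentence = (
    "The Committee anticipates that ongoing increases in the target range "
    "will be appropriate to return inflation to 2 percent over time."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep the classification output deterministic-ish
    messages=[
        {"role": "system",
         "content": "Classify the following FOMC sentence as hawkish, dovish, "
                    "or neutral.  Respond with exactly one word."},
        {"role": "user", "content": sentence},
    ],
)

print(response.choices[0].message.content.strip())  # e.g. "hawkish"
```

Scale that across every sentence of every statement, minute, and transcript, and the grunt work largely disappears.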

5. What happens when AIs trade with AIs?

High-speed trading algorithms trade with each other in as little as 64 millionths of a second, and an estimated 50% of total equity trading volume comes from HFT.  But the innovation of GPT technologies is not even-faster trade execution - it’s natural language processing.

So what happens when one AI produces output that another AI uses to place a trade?  And what happens when one AI learns that it can produce output to manipulate other AIs in favorable ways? 

ChatGPT has received widespread attention for producing “fake” content (like judicial opinions!) that looks sufficiently real to pass as the work of a real-life human.  Now imagine this at scale, with fake press releases and leaked insider documents and spoofed tweets and deepfaked testimonials and ____ and ____ and ____ …

Even prior to ChatGPT, the ever-more-rapid production of information led to estimates that 90% of all data in history was produced in the past 2 years.  Golly, what will that percent be when GPT technologies take root??!!!
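
As a quick back-of-the-envelope sketch (assuming smooth exponential growth, which is my simplification), that statistic implies the world’s stock of data more than triples every year:

```python
# If 90% of all data was created in the last 2 years, then the stock of data
# 2 years ago was 10% of today's total.  With a constant annual growth factor
# g, that means g**2 = 1 / 0.10 = 10.
g = 10 ** 0.5
print(f"implied growth: ~{g:.2f}x per year")  # ~3.16x per year
```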

If any of this catches your eye, definitely read Matt Levine’s AI vs AI segment.

6. Will AI eat itself?

This is not my own original question - it comes from the paper AI Will Eat Itself?.  The concept is that AI models will collapse as they learn from data generated by other AI models.  One of the more troubling findings is how, “over successive generations, acquired behaviors converge to an estimate with extremely minimal variance and how this loss of knowledge about the true distribution begins with the disappearance of the tails.”

The “disappearance of the tails” is particularly important because investment managers must successfully manage tail risk.  Investment managers will be taking unknown risks if they use GPT models without scrutinizing the validity of the source data.  And if AIs collapse, so too will the systems we build on top of them.
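
To see the mechanism in miniature, here is a toy simulation (my own sketch, not the paper’s experiment): each “generation” fits a normal distribution to samples drawn from the previous generation’s fit, and the variance - the tails - steadily disappears:

```python
import numpy as np

rng = np.random.default_rng(0)

n, mu, sigma = 100, 0.0, 1.0  # generation 0: the "true" world is N(0, 1)

for gen in range(1, 501):
    # Each generation "trains" only on the previous generation's output...
    synthetic = rng.normal(mu, sigma, n)
    # ...and refits its model of the world to that synthetic data.
    mu, sigma = synthetic.mean(), synthetic.std()
    if gen in (1, 10, 100, 500):
        print(f"generation {gen:>3}: fitted sigma = {sigma:.4f}")

# sigma drifts toward zero across generations: the fitted distribution's
# tails vanish first, exactly the failure mode the paper describes.
```

Nothing here is adversarial or even complicated - plain sampling error, compounded across generations, is enough to erase the tails.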


ONE MORE THING…

The information and opinions contained in this newsletter are for background and informational/educational purposes only.  The information herein is not personalized investment advice nor an investment recommendation on the part of Likely Capital Management, LLC (“Likely Capital”).  No portion of the commentary included herein is to be construed as an offer or a solicitation to effect any transaction in securities.  No representation, warranty, or undertaking, express or implied, is given as to the accuracy or completeness of the information or opinions contained herein, and no liability is accepted as to the accuracy or completeness of any such information or opinions.  

Past performance is not indicative of future performance.  There can be no assurance that any investment described herein will replicate its past performance or achieve its current objectives.

Copyright in this newsletter is owned by Likely Capital unless otherwise indicated.  The unauthorized use of any material herein may violate numerous statutes, regulations and laws, including, but not limited to, copyright or trademark laws.

Any third-party web sites (“Linked Sites”) or services linked to by this newsletter are not under our control, and therefore we take no responsibility for the Linked Site’s content. The inclusion of any Linked Site does not imply endorsement by Likely Capital of the Linked Site.  Use of any such Linked Site is at the user’s own risk.
