The Tricky Tightrope of AI in Wealth Management
As part of Solutions Review’s Contributed Content Series—a collection of contributed articles written by our enterprise tech thought leader community—Ryan Pannell, CEO and Global Chair of Kaiju Worldwide, debunks some common fears people have regarding AI in wealth management.
This concept of allowing autonomous systems to direct investments is not new. What is new is that what used to happen quietly behind closed doors, magically understood by only a select few whose acumen reliably produced dizzying returns year after year, is now under the bright light of day—with all the professional, amateur, and media scrutiny that comes with that sun. Now, we have a mosh-pit of players, all jockeying for positions of credibility and relevance in a new space whose fundamental technologies are still pretty dark from an explainability perspective.
Investors are crowded around the starting line, some running early because they thought they might have heard the starter’s pistol, and others false-starting because they believe the pistol will fire any second now. Still others are slowly putting on their cleats because they’re not sure this is even the race they’re supposed to be running. While a sprinting analogy might seem a bit facile, this is where we are now: at the beginning of a new Great Race. But to best understand how this race is run, it’s essential to look at why the confusion at the starting line exists in the first place, and doing that requires investors to glance back.
For decades now, AI has been part of our shared social consciousness. It’s been featured in books and movies; when it is, it’s always the Antagonist. It can be accidentally so, as the child-like Joshua was in 1983’s WarGames, after a bored teenager who’d hacked it asked it to play what he thought was a game named “Global Thermonuclear War.” Or it can be more malevolent, like the ever-subconsciously present Skynet from Terminator. What our collective consciousness has taken from the myriad examples of “AI run amok” is two-fold: 1) AI is incredibly powerful, and 2) AI is a threat.
Given our shared mythology, it’s easy to see why we struggle with the decision to hand the reins (and our wallets) over to an autonomous system. The greed centers of the brain recognize the unquestionable power of AI, while the fear centers are still connected with the threat promised by our myths. The result is the turmoil at the starting line.
The way forward is relatively clinical for investors deciding when to join this race. It’s the answer to a simple question: what are you comfortable with? If the entire concept of AI disquiets the investor, there’s no need to seek out mechanisms for exposure. Regardless of what they do, all global companies either already employ AI within their business models or shortly will. As such, the hesitant investor will gain exposure to and benefit from AI’s substantial value proposition, whether they like it or not.
For those who have more faith in the promise this technology might bring but aren’t quite ready to put their capital pool directly into its hands, there is the opportunity for thematic exposure. Companies like Amazon, NVIDIA, Microsoft, Google, and Apple are all investing billions of dollars into trailblazing new uses for AI while simultaneously offering extremely conservative investment profiles. They are huge, well-run (generally), and considered by global market participants to be “too big to fail.”
One step down in size but several steps up in niche are ETFs, which offer thematic exposure directly to many of these companies—plus some smaller AI-only companies—in a dynamically managed basket of opportunity. These ETFs will present a more aggressive volatility profile, given the inclusion of the smaller companies and the singular focus on AI. Still, they represent a reasonable mid-point between “I’m not comfortable with AI, but I know it’s here to stay” and “I am excited about AI, but I don’t trust it enough to make endpoint decisions yet.”
AI is the final (and newest) frontier in endpoint investment decision applications. This is where we live at Kaiju and what I am most qualified to speak about. If I could bring three key understandings to those investors in the first two camps to ease fears and build confidence, it would be these:
1) AI is only capable of making the decisions we allow it to make
To take the above statement one step further, AI doesn’t even know it can make decisions we don’t make it aware of. This is the first critical misunderstanding that leads to investor disquiet: that the AI system’s autonomy can breach the Control Zone. It can’t. We tell it what the markets are and how they work, and then we determine the system’s level of connectivity. If a system is programmed to trade equities, it can no more decide to trade options than it can choose to stop trading altogether and sing in Spanish.
We tell the machine what the strategies are and where the hard landscape of their boundaries ends. The AI then executes our vision, and the reinforcement learning component—that is, the system’s ability to adjust its own decision-making parameters in response to feedback—allows only refinement of the existing system, not the creation of, or migration to, something entirely unrelated.
2) There is a huge difference between Black Box and Responsible AI
Black Box, as the term is used outside the actual discipline, refers to systems whose entire decision tree cannot be explained (as opposed to just the endpoint). An example would be an autonomous system connected to capital markets and told by its creators only to “go make money.” With that loose guidance as its sole goal, the system could take on risk a human manager never would, ignore trends it didn’t find reliably meaningful, and generally trade like a drunken roulette addict in Vegas.
Black Box systems can be highly profitable, but it’s important to note that the only capital successful firms tend to allocate to such systems is their own. No outside capital; a very high risk appetite. That might be suitable for a private fund or family office with deep pockets, but it’s not generally suitable for public use.
Systems that use Responsible AI, on the other hand, use the AI’s key strengths—heavy filtering and culling of massive data sets, the ability to recognize changes in patterns in real time, and decision-making capabilities on the nanosecond timescale—to refine and theoretically “perfect” strategies conceived by humans. They follow the rules set by their creators and have only the autonomy to work and grow within the box they have been placed in. This is what we use at Kaiju, which tends to keep investor capital safe while “letting AI be AI.” If you want to be sure that your capital is being managed prudently, ask the manager what flavor of AI they’re using.
3) Sentience is a long way away
In the AI of our nightmares, there is an “awakening” where the AI becomes self-aware, understands that it’s being tasked by humans to do things it might not want to, and decides not to play the game anymore. This theoretical evolution is nothing but a fantasy; no path leads there. To react the way it does in science fiction, the AI would have to become globally aware of its own existence and interpret that existence through a human framework of emotions.
Were it ever to become aware, what evidence is there that its awareness would mimic our own? As a machine, it cannot feel pain or pleasure; it has no emotional response. It could not perceive that there might be a “more” out there for it, that such a “more” would be in any way desirable, or that it is disadvantaged by the tasks we ask it to perform. The closest we will ever come to sentience is this: the machine may someday predict what we want from it with such precision that we confuse anticipation with self-awareness, when in fact it will simply have learned our patterns and the rules of the programs we run by. That’s it; that’s as far as this road goes. There’s nothing to be scared of so long as we program responsibly.
Ultimately, there are many reasons to be both optimistic and cautious about AI’s future as a financial decision-making mechanism. It is vastly more powerful than we are or ever can be when it comes to data analysis, pattern recognition, calculation speed, and probabilities-based decision-making. But AI is, at heart, only a machine, and it will always depend on us for guidance and for the initial spark of imagination from which all great and powerful things grow.
If it ever does become ‘The Greatest Investor of All Time,’ it will be because we trained it, shaped it, and then let it grow along the guy wires we anchored, and because it ultimately learned to make fewer mistakes than we did. The journey from “now” to “there” should be fascinating to watch.