Treating generative AI as a trainable teammate could be promising for portfolio managers, but it won't replace the human making the final decision
Each month at WP we offer a slate of articles and content pieces that go deep on a particular topic. This November, we’re exploring the uses of artificial intelligence in wealth management.
Paul Kornfeld thinks it's a bit too simple to ask whether generative artificial intelligence can become a portfolio manager. The Director of Technology Services at SIA Wealth Management thinks it's better to ask how generative AI can contribute to portfolio management teams.
While acknowledging that an independent AI could technically manage a portfolio, Kornfeld also recognizes that nobody is truly a solitary portfolio manager. The work of a PM requires collaboration, whether with an administrative assistant, a research analyst, or another PM to bounce ideas off of. As the wealth management industry seeks new ways to apply generative AI to its processes, Kornfeld argues that treating an AI as a colleague and teammate is key to unlocking its utility.
“There are already cases of totally autonomous AIs with black boxes, and they've either failed a lot of times or succeeded in some ways. But that's not necessarily, I think, where the future is going,” Kornfeld says. “I think the real shift of thinking about AI is to see it as a teammate, as a co-worker, as a somebody that's going to do a task for you and can add value in that area. But to kind of pit human and AI portfolio managers against each other isn't, necessarily, my viewpoint of it. I don't think AI is going to replace the portfolio manager. I think it's going to redefine what the portfolio manager can do in a day.”
What Kornfeld envisions is a system where one PM effectively runs a whole team of AI agents, each with its own tasks, collaborating with each other and the PM to reproduce the work of human PM teams. The human, with a greater capacity for reasoning than any AI large language model has yet demonstrated, remains in the driver's seat. Key processes of research, analytics, and risk identification can be conducted by the AI agents. The scutwork of compliance filings and trade hygiene is also easily handed off to them. Moreover, the speed with which these agents can gather information on key events like an earnings report might give the AI-human PM team an advantage.
For PMs and advisors considering adding AI agents into their workflows, Kornfeld stresses that framing them as colleagues is key. As with any colleague, you have to train them, anticipate mistakes, give feedback, and correct them. The promise of AI is not in its ability to conduct each task perfectly on the first try, but in its capacity to learn fast and adapt to feedback. Expecting perfection, he says, is unrealistic and will lead to disappointment. Putting in the work to train and sculpt an AI that a PM can trust to do their work will pay dividends.
Part of that feedback work, Kornfeld notes, is establishing essential guardrails and security controls for an AI tool. Oversight is essential for success. For example, PMs must clearly delineate between the exploration of a hypothetical trade in a research request and the execution of a real trade. Doing that feedback work may mean that adding an AI tool is actually detrimental to efficiency in the short term, much as adding a new, inexperienced teammate can be. Just as a new graduate may develop into a talented PM, an AI tool requires input and feedback to become valuable and additive to efficiency.
Kornfeld cites his former professor at Stanford University, Jeremy Utley, who teaches on AI and design thinking. Utley has long promoted the idea of viewing AI as a teammate, reframing AI adoption from a story of man versus machine into a narrative of collaboration and specialization.
For advisors and PMs who are considering the use of AI, Kornfeld suggests beginning with conversations. Using these tools in client interactions, he says, can both give the tool a better grasp of an advisor or PM's work and offer utility as the LLM teases out key insights from the conversation. At the same time, he acknowledges that a degree of caution and wariness will be required to ensure these tools are reliable. Moreover, when handling sensitive data and specific forms of work, Kornfeld argues that building LLMs from the ground up can be far more effective than using open models like ChatGPT, which can be coloured by the sheer volume of unspecified inputs they receive. Just as with a human colleague, you should discuss privileged topics only within your organization, and never expect 100 per cent accuracy.
“We don't expect that with our own co-workers and our own teammates, we don't expect them to be 100 per cent. We expect them to take constructive criticism and not make the same mistake again. I think our expectations should be the same with AI,” Kornfeld says. “And as technology gets better, those quality assurance measures will remain essential. You still need to cite your sources…But, for some reason, in finance and portfolio management we want that holy grail. We want that AI that's going to create the perfect portfolio strategy, but I think that back and forth interaction needs to be a part of it.”