In the last 12 months, Generative AI/ChatGPT has piqued the interest of firms and regulators alike. While AI and associated technologies are by no means new, innovative use cases have created new opportunities and new risks within the investment advisory space.
While no regulations are in effect as of this blog's publication, regulators have made their stance known through proposals, enforcement actions, and statements addressing their many and varied concerns.
In fact, in the press release for the SEC’s proposed rule regarding AI, SEC Chair Gensler stated, “Today’s predictive data analytics models provide an increasing ability to make predictions about each of us as individuals. This raises possibilities that conflicts may arise to the extent that advisers or brokers are optimizing to place their interests ahead of their investors’ interests. When offering advice or recommendations, firms are obligated to eliminate or otherwise address any conflicts of interest and not put their own interests ahead of their investors’ interests. I believe that, if adopted, these rules would help protect investors from conflicts of interest — and require that, regardless of the technology used, firms meet their obligations not to place their own interests ahead of investors’ interests.”
However, while firms should be cautious in their usage, AI brings with it a plethora of transformative and innovative solutions to many of today’s challenges.
Who is Using AI and Why?
During a recent webinar, we polled our audience to gauge their interest in AI. The results?
9.86% stated they are currently using Generative AI/ChatGPT for business purposes, and a whopping 46% stated they are looking into its possible business implications.
Given the murky (at best) outlook from regulators, compounded by recent enforcement actions regarding its use in marketing, it comes as no surprise that many firms have not made the leap into Generative AI.
But, as we continue to face a heightened degree of regulatory pressure, compliance requirements, and resource constraints, AI may become the go-to tool for automation and task transformation.
In a previous poll, our audience shared both their use cases and their concerns when it comes to AI.
Use cases:
- To prepare marketing/communications
- To use for innovation in the technology stack
- To make due diligence faster/more efficient
- To support deal sourcing, internal processes, and searches
- Internally to make regulatory reviews faster/more efficient
Concerns:
- Potential compliance violations
- Technical limitations or errors
- Lack of accountability
- Overreliance on AI
- Ethical implications
- Job displacement
For many, AI represents the democratization of work. In fact, a recent New York Times article highlighted David Autor’s argument to that effect: “A.I. becomes not a job killer but a ‘worker complementary technology,’ which enables someone without as much expertise to do more valuable work.
Early studies of generative A.I. in the workplace point to the potential. One research project by two M.I.T. graduate students, whom Mr. Autor advised, assigned tasks like writing short reports or news releases to office professionals. A.I. increased the productivity of all workers…”
While the future of AI may be unclear, one thing is certain: change is inevitable, and for those who embrace innovation, it could unlock new potential and new opportunities for growth and expansion.
Answering Your AI Compliance Questions
Navigating the intersection of innovation and risk mitigation is a core component of any compliance professional’s job. But you don’t have to do it alone.
Enroll in our upcoming educational course on May 17 to explore how you can foster innovation while mitigating risks associated with Generative AI. By adopting a compliance-first mindset, compliance can facilitate rather than hinder innovation, ensuring that projects progress safely.