The top four AI security concerns for registered investment advisers

May 30, 2023

Explore the top four AI security concerns for advisers, as well as insights from industry experts about the future of artificial intelligence.

The idea of artificial intelligence (AI) dates back at least to the 1950s, when the famous mathematician Alan Turing posed the question: “Humans use available information as well as reason in order to solve problems and make decisions, so why can’t machines do the same thing?”

Since then, the world of machine learning and artificial intelligence has grown exponentially. And when the latest iteration of ChatGPT took center stage in late 2022, uses for artificial intelligence popped up in nearly every area of life.

People seem to be using it for everything: crafting short stories, asking for jokes, proofreading articles, writing emails – the possibilities seem endless. You may even have tried using it for your firm’s daily tasks, like writing first drafts of emails or organizing notes.

It’s an exciting time, but those new technologies also come with new risks.

In today’s blog, we’ll look at the top four AI security concerns for registered investment advisers (RIAs), as well as what the experts have to say about the future of AI’s use in the financial industry.

SEC guidance on artificial intelligence

To put it simply, the Securities and Exchange Commission (SEC) plans to closely monitor and regulate AI in the industry. In an April 2023 letter from the Investor Advisory Committee (IAC) to SEC Chair Gary Gensler, the committee stated:

“The SEC needs to continue to add staff with AI and machine learning expertise in the Divisions and Offices, including FinHub, as the use of these technologies continues to proliferate with investment advisers.”

Back in 2022, Gensler spoke at an AI Policy Forum hosted by MIT to share his thoughts and concerns surrounding AI technology.

“I think that we’re living in a truly transformational time,” Gensler said. “Every bit as transformational as the internet, but it comes with some risks.”

AI security concerns for registered investment advisers

While the full breadth of AI security concerns is still unknown, a few should be on every adviser’s radar – including privacy issues, biased algorithms and more.

1. Privacy issues

The basis of artificial intelligence is continuous learning – as the technology provides services, it takes in new information for future reference.

This raises a few questions, chiefly that of data privacy – including whether AI platforms are saving confidential client information, and if so, how that information is protected from cybersecurity breaches.

Click here to download our complimentary Six-Step Cybersecurity Incident Response Plan for RIA Firms

“You’ve got all these employees doing things which can seem very innocuous, like, ‘Oh, I can use this to summarize notes from a meeting,’” said Steve Mills, the Chief AI Ethics Officer at Boston Consulting Group. “But in pasting the notes from the meeting into the prompt, you’re suddenly, potentially, disclosing a whole bunch of sensitive information.”

Until data privacy concerns are properly addressed and compliance legislation is adopted, it’s best for firms to set clear policies for their staff on what is and isn’t appropriate use of AI technology.

Related: How to find the best-fit compliance management technology for your firm

2. Biased algorithms

In that same AI Policy Forum, Gensler spoke about his concern for AI-driven algorithms that prioritize company goals (i.e., profits) over client interests.

Can AI provide advice that is truly in the clients’ best interest? Does advice taken from an AI qualify as fiduciary advice? Both questions – along with how regulatory bodies can properly test those algorithms – remain unanswered at this point.

3. Predictive analytics

Likewise, Gensler warned about the risks of AI coming to dominate predictive analytics.

AI models can ingest large amounts of data and recognize trends almost instantly – but an overreliance on those predictions could be detrimental to both advisers and clients.

For example, Gensler notes that “there’s a risk that the crisis of 2027 or the crisis of 2034 is going to be embedded somewhere in predictive data analytics.”

4. Risks of cyber attacks

Lastly, there is the risk of cyber attacks from hackers who successfully breach an AI platform’s defenses.

Related: What is an RIA Firm’s Best Defense Against Cyber Attacks?

In a recent Wall Street Journal article, Eric Goldstein (Executive Assistant Director for Cybersecurity at the U.S. Cybersecurity and Infrastructure Security Agency) noted that AI technologies designed to test for cyber intrusions could themselves be used maliciously.

Even with these cybersecurity concerns, regulatory bodies and tech companies alike are pushing forward to find ways AI can bring efficiency to the industry. In the meantime, it’s best to keep these security concerns on your radar and address AI usage within your firm pre-emptively to avoid compliance errors.

Learn more with RIA in a Box

We’re here to help your firm harness new technology while staying compliant – and keep you up to date on changing legislation regarding artificial intelligence. Click here to connect with a member of our team and get started today.