Artificial intelligence as a risk or a reward for compliance
How are businesses approaching AI in compliance now, and in the future?
Artificial intelligence (AI) has long been on the compliance agenda. The launch of ChatGPT in November 2022, followed by other significant generative AI platforms built on top of large language models (LLMs), brought mass accessibility to AI, both for compliance teams and for individuals within organizations.
2023 was a year of reckoning for AI and generative AI, with regulators, governments, and financial institutions attempting to assess its merits and develop guardrails. Reactions diverged sharply: many financial organizations initially sought to ban ChatGPT, even as Bill Gates declared it “the most important advance in technology since the graphical user interface.” X owner Elon Musk called for an “immediate pause” on the training of powerful generative AI systems, while some technology and compliance vendors rushed to integrate it into their offerings.
Regulators are progressively introducing legislation and guidance on compliant approaches to AI. With this in mind, this year we asked respondents for their sentiments on AI and compliance.
Only 10.4% of respondents categorically view artificial intelligence as a reward for compliance teams, while 17.4% believe it to be a definite risk. A further 32.2% said that AI offers both risk and reward, while 40% of respondents chose instead to share their views on AI by selecting “Other.”
Of the respondents who chose “Other,” 21% said that it is too soon to make a decision about AI, or that they have no intention of engaging with it.
There’s no doubt that AI does offer reward, but it’s not instant or an overnight thing.
The only way firms are going to see the reward of AI is if they make data their first priority. Financial services firms’ data is often all over the place – there’s no structure to it, there’s no clarity – and firms won’t see good outcomes with AI unless they manage that data first. From conversations with various financial institutions, it has become clear that unstructured data from multiple platforms is their main blocker to successful AI implementation.
At Global Relay, we’ve spent years capturing and storing data, and structuring it so that it’s easy for firms to search and draw insights from. This means that when we introduce AI, it can very quickly return accurate results instead of reams of false positives.
Firms can hire big teams of data scientists and AI professionals, but unless they start with data and format it correctly, those resources will be wasted. If you want the reward of AI, invest the time in structuring your data first.
Robert Nowacki, Technical Account Manager & Communication Surveillance SME, Global Relay
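Nowacki’s point about structuring data before layering on AI can be made concrete. The sketch below is a minimal, hypothetical illustration (the platform exports, field names, and Message schema are invented for the example and do not represent Global Relay’s actual data model), showing how records from disparate sources might be normalized into one schema that search or AI tooling can rely on.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical raw exports: each platform (email, chat, etc.) delivers
# records in its own shape, so field names and timestamp formats differ.
raw_records = [
    {"from": "a.trader@bank.com", "sent": "2024-03-01T09:15:00Z",
     "body": "Call me re: the block trade", "src": "email"},
    {"sender": "b.sales", "ts": 1709284500,
     "text": "ok will do", "platform": "chat"},
]

@dataclass
class Message:
    """One normalized schema that every downstream consumer can rely on."""
    sender: str
    sent_at: datetime
    text: str
    channel: str

def normalize(record: dict) -> Message:
    """Map each platform's ad-hoc fields onto the common schema."""
    if record.get("src") == "email":
        return Message(
            sender=record["from"],
            sent_at=datetime.fromisoformat(record["sent"].replace("Z", "+00:00")),
            text=record["body"],
            channel="email",
        )
    if record.get("platform") == "chat":
        return Message(
            sender=record["sender"],
            sent_at=datetime.fromtimestamp(record["ts"], tz=timezone.utc),
            text=record["text"],
            channel="chat",
        )
    raise ValueError(f"Unknown source: {record}")

messages = [normalize(r) for r in raw_records]
for m in messages:
    print(m.channel, m.sender, m.sent_at.isoformat(), m.text)
```

The design point is simply that every downstream consumer, AI included, queries one predictable shape, instead of reconciling each platform’s quirks at search time.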
To better understand evolving approaches to AI, we asked respondents whether they intend to introduce AI into compliance workflows in the next 12 months.
While 42.6% of financial services firms said that they will be looking to integrate AI into compliance over the course of 2024, a greater number (57.4%) do not intend to.
These numbers provide further insight when broken down by region, as the figures are heavily skewed by an apparent reluctance among North American organizations to embrace AI.
The industry is waiting to fully embrace any solution that genuinely and competently delivers a dramatic reduction in the problem that spans both trade and eComms surveillance: the false positive.
With the adoption of AI, one would expect regulatory expectations to shift toward fewer, more precise alerts, which in tandem will increase the proportion of alerts that must be properly investigated.
In more general terms, a harsh reality for many senior compliance officers is facing board-level colleagues and justifying what, on a superficial level, looks like little return on their outlay, as well as future demands for budget. Monthly MI can make bleak reading when one compares the number of alerts processed against any real or potential misdemeanors. Of course, we do not want to see large numbers of misdemeanors involving our own staff, but sometimes management wants to see value for money.
Why is it too soon? I think there are many societal considerations to be addressed before wholesale adoption in a rapid timeframe.
The banking sector is currently undertaking one of its periodic staff reduction phases, coupled (again) with talk of pan-European mergers of big banks, which only adds to job loss concerns.
The specter of AI adds to the social concern of mass job losses, and this will act as a drag on rapid acceleration. There are also key privacy concerns to be addressed, be it basic rules around worker and client protection or the use of information within clear boundaries. If AI is to be as powerful and efficient as we are led to believe, then those boundaries have to be established and have to be strong.
For compliance, the outcome will be good (eventually), but there will be cost. It will be a slow creep until one organization goes ‘all in’, succeeds in a 90% or greater reduction in false positives, and gets better results with 90% fewer staff. At that point, the situation will change rapidly.
Martin Gaterell, Associate Director: Private Side Advisory with Monitoring & Surveillance, Unicredit GmbH
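The workload arithmetic behind Gaterell’s “90% reduction” scenario is worth spelling out. The figures below are assumed purely for illustration and are not drawn from the survey:

```python
# Illustrative arithmetic (assumed figures, not survey data) for a
# "90% reduction in false positives" scenario.
monthly_alerts = 10_000          # alerts generated today
true_positives = 100             # alerts reflecting genuine issues
false_positives = monthly_alerts - true_positives

reduced_fp = false_positives * 0.10        # 90% of false positives removed
new_alert_volume = true_positives + reduced_fp

precision_before = true_positives / monthly_alerts
precision_after = true_positives / new_alert_volume

print(f"Alerts to review: {monthly_alerts} -> {new_alert_volume:.0f}")
print(f"Precision: {precision_before:.1%} -> {precision_after:.1%}")
# Alerts to review: 10000 -> 1090
# Precision: 1.0% -> 9.2%
```

Even under these toy numbers, analysts review roughly a ninth of the previous alert volume while every genuine issue remains in the queue, which is the kind of result that would make the value-for-money conversation with the board far easier.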
Roughly 70% of EMEA-based and global respondents expressed intent to integrate AI into their compliance workflows in the next 12 months.
This encapsulates AI-driven tasks across the gamut of compliance: from more efficient alert management and regulatory change management, to AI-powered surveillance.
North American respondents, on the other hand, show a markedly different attitude to AI adoption, with only 34.1% planning to integrate AI solutions through 2024.
Clearly, U.S. financial organizations have reservations about introducing AI practices into sensitive, oftentimes vulnerable, compliance programs. This may in part be a trickle-down effect of a cautious approach from U.S. regulators.
Thus far, the regulatory approach in the U.S. has been measured and chiefly focused on risk. The CFTC, for instance, has declared it is “technology neutral” and focusing on AI evolution – particularly in relation to fairness, transparency, safety, security, and explainability. During the CFTC’s “AI Day,” the National Institute of Standards and Technology (NIST) Chief AI Advisor said:
In order to be able to improve the trustworthiness of the AI system – the safety, the security, and the privacy – you need to know what they are… and how to measure them.
Chief AI Advisor, National Institute of Standards and Technology (NIST)
SEC Chair Gary Gensler appears to have endorsed an “approach with caution” ethos, and the Biden-Harris administration has released an Executive Order on the use of AI to increase transparency and accountability around the evolving technology, while laying the groundwork for defined governance. But while all of these approaches focus on risk, there is not yet one unified message or clarity of approach.
In comparison, regulators across EMEA are taking one of two approaches: tackling the issue head-on, as seen in Europe, or adopting the more relaxed line U.K. regulators seem to be taking.
Turning first to the U.K.: Jamie Bell has said that the FCA aims to be “an enabler, not a blocker” to AI growth. The FCA’s latest AI Update noted that:
Many risks related to AI are not necessarily unique to AI itself and can therefore be mitigated within existing legislative and/or regulatory frameworks. Under our outcomes-based approach, we already have a number of frameworks in place which are relevant to firms’ safe use of AI.
FCA
The U.K. approach therefore appears to be one of fitting new risk into existing regulation. Europe’s approach is far different: it has enacted landmark rules on artificial intelligence, which will enter into force in June 2024. It may be that the clarity of approaches across EMEA, though they differ, contributes to an overall confidence in the implementation of AI, whereas a lack of clear guidance and a cautious approach in North America may be setting the tone across that region.
The jury is still out in the U.S. regarding the efficacy of AI in financial services compliance. Before U.S. financial services firms fully embrace AI to assist with compliance, these firms will need to see definitive data that demonstrates that AI is in fact assisting in reducing the compliance burden.
These firms are also waiting on clear direction from regulators (SEC, FINRA, etc.) regarding recordkeeping requirements if AI is utilized. Firms realize that if they use AI, they need to be able to explain to the regulators what goes on inside the AI algorithms.
Once U.S. financial services firms can clearly see the benefits of AI in the compliance space, and regulators have clarified recordkeeping and audit trail requirements, I think we will see adoption of AI in the U.S. financial services industry increase exponentially.
Chip Jones, Executive Vice President, Compliance, Global Relay
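Neither the SEC nor FINRA has yet prescribed what an AI audit trail must look like, but the recordkeeping concern Jones raises can be sketched. The snippet below is one assumed shape for such a record (the model name, fields, and hashing choice are illustrative, not a regulatory requirement): each AI-assisted decision is logged with the model version and a content hash so reviewers can later verify the record was not altered.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_id: str, model_version: str,
                 prompt: str, output: str) -> dict:
    """Build one audit-trail entry per AI-assisted decision.
    Hashing the serialized entry lets reviewers detect later tampering."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
    }
    entry["content_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

# Hypothetical usage: logging a single AI-triaged surveillance alert.
record = audit_record(
    model_id="surveillance-triage",   # hypothetical model name
    model_version="2024.05",
    prompt="Classify alert #1042: ...",
    output="Escalate: possible market manipulation language.",
)
print(json.dumps(record, indent=2))
```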
“AI could be a useful compliance monitoring tool. It is higher risk when deployed to investment professionals.”
Chief Compliance Officer, Private Equity, North America
“I see that AI may help, however it is only as good as the information that it is provided. For instance, I may not be able to drill down into E.U. sanctions set in 1985 because the information has not been provided up through that year.”
Registrations Compliance Manager, Investment Bank, Global
“It is inherently combined in so much of what we do already electronically that we have to make space for it. But be intelligent in our use and policy making surrounding it.”
Senior Associate, Financial Services, North America