Trustworthy AI in Financial Crime Compliance

AI and Financial Crime Podcast: John Cusack Interview

Preventing, detecting and reporting financial crime is arguably one of the most important roles that banks play in the global financial system. Proceeds from financial crime, such as money laundering and financial fraud, run into trillions of dollars. Many banks spend hundreds of millions of dollars every year on fighting financial crime, and historical deficiencies have cost the industry tens of billions of dollars in regulatory fines.

Given the nature of the task, data and analytics play a key role in this. In this episode of the ‘Trustworthy AI’ podcast, we spoke to John Cusack, Chair of the Global Coalition to Fight Financial Crime, on this important topic. John previously served as Global Head of Financial Crime Compliance at two global banks (UBS and Standard Chartered) and has twice been Co-Chair of the Wolfsberg Group, the industry group for the management of financial crime risks.

In our conversation with John, we first explored the world of Money Laundering, Terrorist Financing and other types of financial crime, and sought to understand how financial institutions use data and analytics to prevent and detect such crime. We discussed the importance of Trustworthy AI in this regulated part of the industry and explored why building models to mitigate financial crime may just be one of the coolest jobs in data science.

Here are some interesting excerpts:

On Financial Crime Compliance

The term “Financial Crime” has traditionally been used to refer to money laundering from organised crime, including drug and arms trafficking. In the last 12 years, the definition has expanded to cover the financial proceeds from a much broader range of crimes, including illegal wildlife trade/logging/mining/fishing, human trafficking, goods piracy, financial fraud, corruption and tax evasion. Collectively, the proceeds from Financial Crime are estimated to account for 2-5% of global GDP, not including tax evasion.

On the role of banks in fighting financial crime

Banks play a critical role in helping law enforcement agencies by preventing, detecting and reporting financial crime. Essentially, that consists of two things: (1) preventing people from opening bank accounts to commit financial crime, and (2) with existing customers and their counterparties, detecting and then reporting any suspected instances of financial crime to law enforcement.

On the role of good data and analytics 

Banks’ ability to “know your customer” (KYC) is at the heart of fighting financial crime – knowing who they are, what their background is, where their economic wealth comes from, what the expected usage of their accounts is (based on their declarations and on past financial activity), whether they are engaging in unusual transactions, whether there are adverse mentions of them in the media, whether they are a prominent public figure or someone related to one, and whether there are national or international sanctions against them. All these concepts have been around for 20 years, but the tools and technologies that enable us to execute on them more successfully, quickly and effectively are getting better year on year.
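For data-minded readers, one way to picture those KYC dimensions is as a structured record against which simple escalation logic can run. The sketch below is a minimal, purely illustrative example; the field names, thresholds and the `needs_review` rule are our own assumptions for illustration, not a description of any bank’s actual checks.

```python
from dataclasses import dataclass

@dataclass
class KycProfile:
    """Illustrative (not exhaustive) customer due-diligence attributes."""
    identity_verified: bool         # who they are
    source_of_wealth_known: bool    # where their economic wealth comes from
    expected_monthly_volume: float  # declared / historical account usage
    observed_monthly_volume: float  # actual activity seen on the account
    adverse_media_hits: int         # adverse mentions of them in the media
    is_pep: bool                    # prominent public figure, or related to one
    sanctions_match: bool           # national or international sanctions lists

def needs_review(p: KycProfile) -> bool:
    """Toy escalation rule: flag profiles whose screening results or activity
    fall outside what was expected at onboarding."""
    unusual_volume = p.observed_monthly_volume > 3 * p.expected_monthly_volume
    return (p.sanctions_match or p.is_pep or p.adverse_media_hits > 0
            or unusual_volume or not p.identity_verified
            or not p.source_of_wealth_known)

# Illustrative usage: a profile behaving as declared, and one that is not.
print(needs_review(KycProfile(True, True, 5_000, 4_200, 0, False, False)))   # False
print(needs_review(KycProfile(True, True, 5_000, 25_000, 1, False, False)))  # True
```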

However, data and technology on their own are not enough, and can only make a meaningful impact if the people and process pillars are also robust. We need good frontline and specialist staff who can identify potentially unusual or suspicious activity, and well-designed, well-executed processes for transaction monitoring, KYC, name screening, sanctions screening, adverse media screening and investigations.

On the role of Artificial Intelligence/Machine Learning (AI/ML)

I acquired my first basic AI machine in 2002. And it didn’t work very well. The idea that we ought to be moving away from simple algorithms to something more complex and productive has been seductive for a long time. However, until the last few years, we had not seen that successfully deployed in the financial crime space. Now, we are seeing successful deployment of AI in doing things better and quicker – algorithms that attempt to carry out the work of human investigators of financial crime alerts more efficiently and effectively. However, the idea that AI is already capable of identifying new forms of financial crime – without expert human input – is still a bit of a stretch. Of course, AI can be extremely helpful in sorting and connecting data once it knows what to look for or what kinds of things should be of interest.

A lot of the financial crime alerts generated today – whether using traditional technology or newer AI-based solutions – turn out to be ‘false positives’ once we investigate them. Machines are pretty good at generating the kinds of alerts we want to see, but not yet that good at zeroing in on the subset that represents genuine suspicious activity. Some vendors, though, are making big improvements here.

One key determinant of the success of AI/ML in financial crime is the availability of a feedback loop – between the reporting of suspicious activity and the confirmation that the reported activity was indeed a true instance of financial crime. In areas where accurate and timely feedback is available – such as fraud – this allows banks to deploy AI effectively. In other areas such as anti-money laundering, where law enforcement departments may not share their final verdict on the reports made to them by banks, this lack of ‘ground truth’ impacts the ability to train AI/ML models effectively.
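To make that feedback loop concrete for data scientists, here is a minimal, hypothetical sketch: alerts only become usable training data once an investigated outcome (the ‘ground truth’ label) is attached to them, while alerts that never receive a verdict can only be scored. All feature names, values and the choice of classifier below are illustrative assumptions, not a description of any real system.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical alert history. Each row is one alert raised by monitoring,
# with toy features; 'confirmed_crime' is the investigation / law-enforcement
# feedback where it exists (None = no verdict was ever received).
alerts = pd.DataFrame({
    "amount":             [9500, 120, 40000, 75, 8800, 15000, 60, 9900],
    "txn_count_30d":      [45, 3, 12, 1, 38, 7, 2, 52],
    "new_counterparties": [9, 0, 4, 0, 7, 1, 0, 11],
    "confirmed_crime":    [1, 0, 1, 0, 1, 0, None, None],
})
features = ["amount", "txn_count_30d", "new_counterparties"]

# The feedback loop in one step: only alerts with a confirmed outcome
# can serve as training data for a supervised model.
labelled = alerts.dropna(subset=["confirmed_crime"])
model = DecisionTreeClassifier(random_state=0).fit(
    labelled[features], labelled["confirmed_crime"].astype(int))

# Alerts without feedback (the anti-money-laundering situation described
# above) can only be scored, not learned from; the model never finds out
# whether its prediction was right.
unlabelled = alerts[alerts["confirmed_crime"].isna()]
print(model.predict_proba(unlabelled[features])[:, 1])
```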

On how AI/ML can become trustworthy in this regulated part of the industry

If you can persuade (stakeholders) of the effectiveness of your AI/ML solution by comparing “before and after”, and it’s compelling, then that should be good enough. You still have to be able to explain all the elements of the system, but we shouldn’t be focusing all the time on how to do stuff; we should be focusing on what we want to achieve. The conversation we should be having on AI is not “this is what my machine does”, but “what can it do for me”.

As an industry, we have got a lot of historical data about the performance of our financial crime control processes over many years. So benchmarking the “before and after”, parallel running, or testing can demonstrate whether or not things are improving – quicker, smarter, faster or better in any other meaningful way. Once this is done, the next conversation is around how to ensure it is solid and does not fall over, that it can be understood, and that it can be operated by a highly regulated institution.
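As a concrete illustration of that “before and after” benchmarking, the sketch below compares an incumbent rules engine with a candidate ML model run in parallel over the same review period, using investigated outcomes as the yardstick. The numbers, names and the `ParallelRunResult` structure are entirely hypothetical and exist only to show the shape of the comparison.

```python
from dataclasses import dataclass

@dataclass
class ParallelRunResult:
    """Outcome counts from running one alerting system over a review period."""
    alerts_raised: int         # total alerts the system generated
    confirmed_suspicious: int  # alerts investigators confirmed as suspicious
    missed_cases: int          # known suspicious cases the system did not flag

def summarise(name: str, r: ParallelRunResult) -> None:
    # Precision: how many raised alerts were genuinely suspicious.
    precision = r.confirmed_suspicious / r.alerts_raised
    # Recall: how many of the known suspicious cases the system caught.
    recall = r.confirmed_suspicious / (r.confirmed_suspicious + r.missed_cases)
    print(f"{name}: precision={precision:.1%}, recall={recall:.1%}, "
          f"alerts to investigate={r.alerts_raised}")

# Illustrative numbers only: 'before' is the incumbent rules engine,
# 'after' is the candidate ML model run in parallel on the same data.
summarise("before (rules)", ParallelRunResult(10_000, 150, 30))
summarise("after  (ML)   ", ParallelRunResult(4_000, 160, 20))
```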

On why working in financial crime can be rewarding for data professionals

Working in a regulated institution need not be a bad experience. It creates a framework for being able to do wonderful things, with some incredibly smart people, fantastic resources and technology, and lots of data. This is where the action is – a fantastically interesting ecosystem where you make a real difference with your work. And because of that, these institutions are regulated. That’s a good thing!

October 12th was Ada Lovelace Day (it fell in the week we discussed these matters), a day that celebrates all the women involved in science, technology, engineering, and maths. Ada is considered to be the first computer programmer in the world. But what’s really interesting about her is that she wasn’t just a mathematician, a talent she acquired from her accomplished mother; she was also the daughter of Lord Byron, a very famous English poet, and she brought some of that poetry and imagination to her science.

I love that idea of our data scientists also being poets and having imagination. The great thing about working in the financial crime space is that it’s not just about numbers; it’s also about people, about things, and about imagining what’s going on. Every day, there’s something new to think about and to enjoy. So come on board and enjoy it. It’s a wonderful journey.
