Since the 2008 financial crisis, financial services organisations have had to respond to a stricter regulatory compliance regime. This has driven the adoption of technologies that automate processes such as Know Your Customer (KYC) screening, fraud detection, auditing, and anti-money laundering checks. Cyber-security attacks have likewise led software developers to deploy advanced artificial intelligence monitoring tools, yet a quandary remains over when and how often such tools should be used. It's therefore important to examine the trade-offs, and to consider how these new approaches to compliance monitoring and threat detection should be deployed.
Today, financial services companies are required by the Financial Conduct Authority (FCA) to comply with a number of international anti-money laundering laws and with the UK Corporate Governance Code (the UK's answer to Sarbanes-Oxley), as well as with a range of internal policies and standards. All of this grows more complex as the regulatory environment constantly changes, making compliance more challenging for a financial services organisation than ever.
This complexity is compounded by the fact that anti-money laundering (AML) obligations span more than one national or regional jurisdiction. Ben Taylor, CEO at Rainbird Technologies, explains: “On AML, legislation is global and disparate with no common law. Imagine an employee working for a bank based in NY, but he works out of London and he is trying to write some business in Monaco – what regime applies? Ultimately it needs to boil down to a binary decision, can he or can he not write this business.”
The key challenge, then, is to make the KYC process more efficient, reducing both false positives and false negatives, by using artificial intelligence (AI) and machine learning to automate the data analysis. Yet Taylor warns that many of the available tools are really just smart big-data analysis solutions that learn: their purpose is to find insights that traditional data analysis methods often miss. What they lack, he says, is the ability to reason:
“Machine learning is capable of reviewing millions of historic KYC files and using this data to drive a probabilistic judgement of a new file being compliant. The trouble is, such methods are ‘Black Box’ and cannot provide a reasoning for the judgements reached. False positives are therefore common and results need to be validated by an expert, putting the expert at the end of each process.”
In his article for Compliance Week of 26th October 2016, Jose Tabuena supports the view that the old methods of analysis simply aren’t enough anymore: “The simple pattern-matching approaches of the recent past are highly susceptible to both false positives and false negatives. Advances in machine learning and affordability of large-scale computing resources enable more sophisticated anomaly detection. In order to accomplish this new approach to threat detection, compliance professionals will need greater knowledge of core operational processes to understand a potential compliance incident’s business context. In short, the human element remains critical.”
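To make the contrast concrete, the “simple pattern-matching” Tabuena describes can be as basic as a statistical outlier test on transaction amounts. The sketch below is illustrative only, not a description of any vendor's product: `flag_anomalies` is a hypothetical function using a z-score threshold as a stand-in for the richer, learned anomaly-detection models the article discusses.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return the indices of transactions whose amount deviates more than
    `threshold` standard deviations from the mean (a simple z-score test).
    Production systems would use many features and a trained model, but the
    idea is the same: score deviations from normal behaviour, then flag."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [i for i, amount in enumerate(amounts)
            if sigma > 0 and abs(amount - mu) / sigma > threshold]

# A run of routine payments followed by one very large transfer:
history = [100.0] * 20 + [100000.0]
print(flag_anomalies(history))  # the large transfer's index is flagged
```

Note that even this toy version exhibits the weakness Taylor and Tabuena describe: it can say *that* a transaction is unusual, but not *why* it matters, so a human expert still has to interpret every flag.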
Taylor adds that there has been a shortage of powerful tools that can make human-like decisions to arrive at “complex KYC judgments based on multiple sets of compliance obligations, and numerous data sources – with the additional challenges presented by either missing or poor quality data.” He has found that using a decision tree is insufficient. In his view a cognitive reasoning engine, such as the one Rainbird employs, is essential: it enables financial institutions to build systems that make these decisions automatically, each accompanied by a rationale.
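The key property Taylor describes, a binary decision that comes with its reasoning attached, can be sketched with a simple rule-evaluation loop. This is a minimal illustration under assumed rule names, not Rainbird's actual engine or API: `assess_kyc` and the example rules are hypothetical.

```python
def assess_kyc(record, rules):
    """Evaluate a KYC record against named (rule, predicate) pairs.
    Returns a binary decision plus a rationale: the list of rules that
    failed. Unlike a black-box score, every outcome is explainable."""
    failures = [name for name, check in rules if not check(record)]
    return {"compliant": not failures, "rationale": failures}

# Illustrative compliance obligations, encoded as named predicates:
rules = [
    ("identity verified", lambda r: r.get("id_verified", False)),
    ("not on sanctions list", lambda r: not r.get("sanctioned", False)),
    ("source of funds documented", lambda r: r.get("funds_documented", False)),
]

applicant = {"id_verified": True, "sanctioned": True}
print(assess_kyc(applicant, rules))
# non-compliant, with the specific failed obligations listed as the rationale
```

Because the rationale names the exact obligations that failed, the expert reviewing the result sees an explanation rather than just a probability, which is the gap Taylor says pure machine learning leaves open.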
With the spectre of cyber-attacks and privacy issues perpetually on the agenda, there is also a need to monitor and control who can access sensitive data, while keeping an eye out for any threats. Compliance, including cyber-security compliance, means ensuring a company meets the FCA's regulations as well as the UK's data protection laws (if that's where the company is based or trades).
Threats don't just come from outside a financial firm; they can come from within, from its own employees. It's therefore vital to keep track of where sensitive data is located and who's accessing it at any given moment. External attacks by hackers need to be monitored too.
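Tracking who accesses sensitive data at any given moment amounts to auditing every read and flagging the unexpected ones. The sketch below is a toy illustration of that idea under assumed names (`AuditedStore` is hypothetical); real deployments rely on dedicated identity-and-access-management and security-monitoring tooling rather than application-level wrappers.

```python
import time

class AuditedStore:
    """Wrap sensitive records with an access log, flagging and denying
    reads by users outside an allow-list. Every attempt is recorded so
    compliance staff can see who touched what, and when."""

    def __init__(self, records, allowed_users):
        self._records = records
        self._allowed = set(allowed_users)
        self.access_log = []   # every read attempt: (user, key, timestamp)
        self.alerts = []       # attempts by users not on the allow-list

    def read(self, user, key):
        self.access_log.append((user, key, time.time()))
        if user not in self._allowed:
            self.alerts.append((user, key))  # flag a possible insider threat
            return None                      # and deny the read
        return self._records.get(key)

store = AuditedStore({"acct-1": "sensitive details"}, allowed_users=["alice"])
store.read("alice", "acct-1")    # permitted and logged
store.read("mallory", "acct-1")  # denied, logged, and raised as an alert
```

The point of the example is the audit trail: whether the threat is internal or external, the firm can answer “who accessed this data, and when?” from the log, and act on the alerts.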
The best way to use AI to know your customer, detect fraud, and prevent money laundering and both external and internal security threats is therefore to deploy a broad spectrum of technologies in concert. Taylor concludes that machine learning approaches “can raise red flags, but until recently it was not possible to automate the interpretation of results and turn a risk factor into a decision that can be acted on automatically.” So to turn insight into action, financial services companies need to combine machine learning models that raise red flags with a layer of human-like cognitive reasoning. This kind of approach is the future of KYC, and it needs to start today.
Suggested further reading:
Celent's whitepaper, ‘Artificial Intelligence in KYC-AML: Enabling the Next Level of Operational Efficiency’, August 2016.
By Graham Jarvis