Bates Research | 01-19-24
Expert Insights: The Compliance Risks of Artificial Intelligence
By Brandi Reynolds, CAMS-Audit, Managing Director, BSA/AML Compliance, FinTech & Virtual Assets
As a compliance professional who uses this technology, I was struck by the survey findings pertaining to Artificial Intelligence-based compliance solutions while reviewing the State of Financial Crimes 2024 report published by ComplyAdvantage.
Artificial Intelligence (AI) refers to the use of technologies to build machines and computers that have the ability to mimic human intelligence, such as being able to analyze data and make recommendations. This set of technologies can be integrated into a system to enable it to reason, learn, and act. Machine learning, a subset of AI, applies algorithms to data to produce models that can perform such tasks.
The use of AI and machine learning is becoming increasingly widespread in today's digital environment as it can help businesses “see the unseen.” Financial institutions of all types are encountering the ever-changing nature of financial crimes as well as an increase in transactional data. As such, regulators and financial institutions must continuously improve their fraud and anti-money laundering strategies and capabilities.
AI and machine learning can assist in analyzing tremendous volumes of data and can be utilized for a variety of compliance-related obligations including:
- Customer identification and verification
- Transaction monitoring
- Risk-based monitoring
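As an illustration of the rule-based end of transaction monitoring, the sketch below flags a classic “structuring” red flag: several transactions just below a reporting threshold within a short window. All names, thresholds, and parameters here are hypothetical choices for illustration; production monitoring systems combine many such rules with machine-learning risk scores.

```python
from datetime import datetime, timedelta

# Hypothetical parameters for illustration only.
THRESHOLD = 10_000        # reporting threshold (e.g., CTR threshold)
NEAR_BAND = 0.90          # "just below" = within 90-100% of the threshold
WINDOW = timedelta(hours=72)
MIN_HITS = 3              # how many near-threshold transactions trigger a flag

def flag_structuring(transactions):
    """transactions: iterable of (account_id, datetime, amount).
    Returns the set of account ids showing a possible structuring pattern."""
    near_misses = {}
    for account, ts, amount in transactions:
        if THRESHOLD * NEAR_BAND <= amount < THRESHOLD:
            near_misses.setdefault(account, []).append(ts)
    flagged = set()
    for account, times in near_misses.items():
        times.sort()
        # sliding window: any MIN_HITS near-threshold transactions within WINDOW
        for i in range(len(times) - MIN_HITS + 1):
            if times[i + MIN_HITS - 1] - times[i] <= WINDOW:
                flagged.add(account)
                break
    return flagged

txns = [
    ("A", datetime(2024, 1, 1, 9), 9_500),
    ("A", datetime(2024, 1, 1, 15), 9_800),
    ("A", datetime(2024, 1, 2, 10), 9_900),
    ("B", datetime(2024, 1, 1, 9), 12_000),
]
print(flag_structuring(txns))  # account "A" is flagged; "B" is not
```

A machine-learning model would complement a rule like this by scoring patterns the rule set does not anticipate, which is precisely where the data-quality and interpretability concerns discussed below come into play.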
The Financial Action Task Force (FATF) has commented on the use of AI and its advanced computational techniques to “perform tasks that typically require human intelligence, such as recognizing patterns, making predictions, recommendations, or decisions.” As explained, machine learning can be used to train computer systems to “learn from data,” without the need for extensive human intervention.
Many regulators globally have commented on the use of AI. Two notable published remarks come from the Financial Crimes Enforcement Network (FinCEN) and the New York Department of Financial Services (DFS).
In guidance, the DFS stated that “Blockchain analytics tools provide companies with an efficient, data-driven way to conduct customer due diligence, transaction monitoring, and sanctions screening, among other things, which are all critical elements of our virtual currency regulation. We expect regulated entities to utilize best practices to uphold the safety and soundness of the virtual currency market and to protect consumers.”
FinCEN, in a joint statement, indicated: “New technology, such as artificial intelligence and machine learning, can provide better strategies for banks of all sizes to better manage money-laundering and terrorist-financing risks, while reducing the cost of compliance.”
While AI can provide many benefits, it also brings with it potential risks that organizations must consider when incorporating AI into their processes. ComplyAdvantage’s survey indicated that 66% of the respondents believe that AI poses a growing cybersecurity threat. Additional risks and challenges in using AI for BSA/AML compliance include:
- Data Quality – Inaccurate or unrepresentative data can lead to false positives or false negatives, potentially undermining the effectiveness of BSA/AML compliance efforts relying on this data.
- Regulatory Concerns – AI systems must meet legal requirements, including explainability, auditability, and compliance with data protection, anti-discrimination, and consumer protection laws.
- Interpretability – It is often challenging to understand how AI models arrive at their decisions.
- Adversarial Attacks – Malicious actors may attempt to manipulate or deceive the system’s decision-making process.
To address these risks, institutions must ensure the quality and integrity of the data used to train and test AI models before operationalizing them. Those processes should include regular data validation and monitoring for bias. They must also invest in techniques and technologies that enhance the interpretability of AI models, enabling regulators to understand and validate the rationale behind a system’s outputs. This includes model explainability algorithms and rule-based systems. Institutions should also implement security measures to protect AI models from attacks that could compromise the integrity of the compliance processes.
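As a minimal sketch of what interpretability can mean in practice, the example below assumes a simple linear risk-scoring model (the feature names and weights are hypothetical) and decomposes an alert score into exact per-feature contributions, so a reviewer or regulator can see which inputs drove the outcome:

```python
# Hypothetical linear risk-scoring model: the score is a weighted sum of
# customer features. Because the model is linear, each feature's contribution
# to the final score can be reported exactly, supporting the explainability
# and auditability expectations described above. The weights are illustrative.
WEIGHTS = {
    "high_risk_jurisdiction": 2.5,
    "cash_intensive_business": 1.5,
    "txn_volume_zscore": 1.0,
}

def score_with_explanation(features):
    """features: dict of feature name -> numeric value.
    Returns (score, contributions ranked by how much each drove the score)."""
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0)
                     for name in WEIGHTS}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = score_with_explanation({
    "high_risk_jurisdiction": 1,
    "cash_intensive_business": 0,
    "txn_volume_zscore": 2.0,
})
print(score, ranked)  # score 4.5, driven mostly by the jurisdiction flag
```

More complex models (gradient-boosted trees, neural networks) do not decompose this cleanly, which is why institutions often pair them with dedicated explainability techniques or retain rule-based systems for decisions that must be defended to regulators.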
It is important for compliance professionals to understand how their AI systems function and to monitor their performance in order to identify and address biases, vulnerabilities, and emerging risks. However, according to ComplyAdvantage’s findings, 50% of the respondents were “concerned about explaining the decisions and outcomes of AI-based financial crime solutions to various stakeholders and regulators.” Despite this, 89% of the respondents felt somewhat comfortable compromising explainability for greater automation and efficiency.
Additionally, when considering how, and to what extent, to incorporate AI into an AML compliance program, financial institutions should assess their existing policies and procedures, or create new ones, to address the use of the technology.
For many financial institutions and compliance professionals, the risk is outweighed by the reward. However, it is critical to continuously address the risks associated with data quality, bias, regulatory compliance, and adversarial attacks. While AI can add efficiency, recognize patterns, and digest large data sets, the human element is clearly needed to monitor AI systems and ensure integrity and fairness in an institution’s efforts to prevent financial crime.
About the Author:
Brandi Reynolds
Chief Growth Officer and Senior Managing Director, Fintech & Banking Compliance