Bates Research  |  03-14-24

AI Inventory and Risk Assessment: What a Financial Institution Risk Officer Needs to Know

It’s 10 PM.  Do you know what your AI is doing?  This is a play on a public service announcement that appeared on TV screens across the U.S. in the 1960s, 1970s, and 1980s at 10 or 11 at night, reminding parents to be aware of their children’s whereabouts.  The intent was to keep children safe.

We want to keep financial institutions safe, too.  Safe and sound, actually.  And while the use of Artificial Intelligence (AI) can benefit institutions in many ways, it can also create risk – risk that can impact safety and soundness.

For managers at financial institutions to understand AI risk, they first need to conduct a thorough inventory of what AI is being used, by whom, and for what purpose inside their firm.  This is an important step because many financial institutions haven’t yet crafted an AI strategic plan and roadmap and are adopting AI in a piecemeal, uncoordinated fashion.  The information gained from conducting an AI inventory project can then serve as the first step of an overall AI risk assessment.

First Things First: Define and Inventory AI Usage

Before taking an inventory, define what is meant by “AI” so leadership can respond consistently and correctly to the inevitable questions from their teams.  Explain that the inventory needs to include all of the applications used at the institution that claim to incorporate AI, such as fraud identification systems, customer risk rating systems, “negative news” systems used in customer due diligence, and transaction monitoring systems.  Ask each system owner to contact the vendor for an explanation of how AI is used in the system.  Beware of “AI Washing,” a term recently used by SEC Chair Gensler to describe vendors’ misrepresentation or exaggeration of a system’s AI capabilities.
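
To make the inventory concrete, here is a minimal sketch (in Python) of the kind of record an institution might keep for each vendor-supplied system.  The field names are our own illustrative assumptions, not a prescribed schema; the vendor_ai_explanation field is where the system owner records the vendor’s written answer on how AI is actually used.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI inventory for a vendor-supplied system."""
    system_name: str            # e.g., "Transaction Monitoring Platform"
    business_owner: str         # accountable department or person
    vendor: str
    claimed_ai_use: str         # what the vendor markets as "AI"
    vendor_ai_explanation: str  # vendor's written answer on how AI is actually used
    verified: bool = False      # set True once the explanation has been reviewed

# Example entry -- the system, vendor, and details are illustrative only
record = AISystemRecord(
    system_name="Negative News Screening",
    business_owner="BSA/AML Compliance",
    vendor="ExampleVendor Inc.",
    claimed_ai_use="ML-based adverse media matching",
    vendor_ai_explanation="Pending vendor response",
)
```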

Explain that the inventory also needs to include more ad hoc uses of AI, such as generative AI queries and chatbots.  While it’s not a certainty that such tools are being used, generative AI is increasingly likely to be leveraged wherever sales, marketing, communications, and learning/development materials are created; in systems development, where code is generated; in customer service and advisory roles, where answers can be retrieved quickly from data inside and outside the institution; and even in regulatory compliance, where regulations can be searched quickly.  Because leadership might not be aware of when Marketing, for example, is using generative AI tools to develop social media posts, consider sending a survey to each employee to inventory the ad hoc AI used across the institution.
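
Survey responses can be turned into a quick inventory view by tallying reported generative AI use by department.  The sketch below assumes responses are collected as simple (department, tool, purpose) entries, a structure invented here for illustration:

```python
from collections import Counter

# Hypothetical survey responses: (department, tool, purpose)
responses = [
    ("Marketing", "generative AI chatbot", "draft social media posts"),
    ("IT", "code assistant", "generate boilerplate code"),
    ("Compliance", "generative AI chatbot", "search regulations"),
    ("Marketing", "image generator", "campaign graphics"),
]

# Count ad hoc AI use per department to see where governance is needed first
usage_by_department = Counter(dept for dept, _, _ in responses)
for dept, count in usage_by_department.most_common():
    print(f"{dept}: {count} reported ad hoc AI use(s)")
```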

Once Inventoried, Conduct an Initial AI Risk Assessment

Armed with the knowledge of who is using AI and in what manner, consider having a Risk Analyst meet with each department leader to conduct the risk assessment, noting that the first risk will likely be that the institution has “incomplete knowledge of ‘who, what, where, when, and how’ of AI use in the institution.”  There are also standard governance risks such as “AI is being deployed without an approved framework,” “The institution lacks AI policies and procedures,” “The institution lacks an AI approval process,” etc.  So, the Risk Analyst must rely on standard practices and conduct the AI risk assessment the same way as any other risk assessment, at least initially, until a body of knowledge is generated on this topic.  However, the Risk Analyst should have a working knowledge of how AI can be used so they can ask the right questions, identify risks, and understand controls.
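
Because the first pass follows standard risk-assessment practice, a conventional likelihood-times-impact score is a reasonable starting point.  The sketch below uses a common 1 to 5 scale and a control-effectiveness adjustment; the scale and formula are widely used conventions, not a prescribed methodology:

```python
def inherent_risk(likelihood: int, impact: int) -> int:
    """Standard inherent risk score on a 1-5 x 1-5 scale."""
    return likelihood * impact

def residual_risk(likelihood: int, impact: int, control_effectiveness: float) -> float:
    """Discount inherent risk by how effective existing controls are (0.0-1.0)."""
    return inherent_risk(likelihood, impact) * (1.0 - control_effectiveness)

# Example: "AI deployed without an approved framework."
# With no controls in place, residual risk equals inherent risk.
print(residual_risk(likelihood=4, impact=4, control_effectiveness=0.0))  # 16.0
```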

The first AI risk assessment will likely be rudimentary, but a rudimentary risk assessment is better than none at all, especially from an examiner’s perspective.  The risk assessment might reveal that additional controls need to be implemented, which is common with newer initiatives.  What’s important to Risk Officers is how the institution responds to the risk assessment.

Log Issues and Define Corrective Action

Once the risk assessment is complete, note the areas where risk hasn’t been reduced to a level consistent with the institution’s risk appetite, log any issues, and formulate corrective action plans.  Some issues might require a timelier management response than others.  For example, if the risk assessment reveals that many staff are using ad hoc generative AI with no governance at all, then basic AI guidelines and parameters should be communicated sooner rather than later.  Similarly, if the risk assessment reveals there is no internal “owner” or sponsor of AI in the firm, then one should be appointed.
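
An issues log can be as simple as a list of entries with a description, an owner, a severity, and a due date; sorting by severity and due date surfaces the timeliest items first.  The sketch below is illustrative, with hypothetical issues drawn from the examples above:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Issue:
    description: str
    owner: str
    severity: int   # 1 (low) to 5 (high) -- drives response timeliness
    due: date

issues = [
    Issue("Ad hoc generative AI use with no governance", "CRO", 5, date(2024, 4, 15)),
    Issue("No internal AI owner/sponsor appointed", "CEO", 4, date(2024, 5, 1)),
    Issue("AI policies and procedures not yet drafted", "Compliance", 3, date(2024, 6, 30)),
]

# Highest severity and earliest due date first
for issue in sorted(issues, key=lambda i: (-i.severity, i.due)):
    print(f"{issue.due}  sev {issue.severity}  {issue.owner}: {issue.description}")
```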

The corrective action will likely include the creation of a number of controls, and since these will all be considered “self-identified,” a regulatory examination should go more smoothly than if examiners had identified the need for controls themselves.

Regardless of where an institution is in its AI journey, it’s best to take the time to conduct an AI inventory and risk assessment to identify and remediate risks in order to stay safe and sound.

How Bates Helps

The experts at Bates Group are knowledgeable about AI, AML/CFT/BSA, and regulatory compliance, and are prepared to assist with creating your AI inventory and performing an AI risk assessment.  To find out more, please contact Bates Group today.

Brandi Reynolds

Managing Director, BSA/AML Compliance, FinTech & Virtual Assets

breynolds@batesgroup.com

864.809.7718