Bates Research | 06-26-25
Adapting TPRM to Manage AI and Fourth-Party Risk

Financial institutions and Fintechs (collectively, “institutions”) have expanded their use of vendors and third-party relationships over the past five years, as Banking-as-a-Service (BaaS) has blossomed and the shift toward outsourcing has progressed. This evolution has led examiners to focus more on third-party risk management (TPRM) in their guidance and to perform deeper dives during exams. The concept of fourth-party risk management has also gained traction, and we’ve occasionally heard the term “fifth-party risk management” at conferences, used to encourage institutional leaders to consider how many layers of subcontracting and outsourcing might be involved when working with a vendor.
Starting in 2023, the TPRM dialogue began to shift toward the use of artificial intelligence by vendors.* The result is a new TPRM landscape that financial institutions and Fintech leaders must adapt to.
*For simplicity in this article, we use “vendor” to refer to both vendors and third parties.
Role of the Financial Institution
We will address two questions that we frequently hear about this topic:
- What should be the financial institution’s role when considering fourth-party relationships?
- What should be the financial institution’s role when considering the use of AI by a vendor (or further in the chain) to deliver service to the institution’s customers?
When we respond to these questions, we start by asking how aware the institution is of its vendor’s activities, and then ask about the protocol it uses to address the risks posed by new information the vendor provides. In short, we call this “awareness and active risk management.”
Awareness
Having a vendor complete an annual questionnaire (including questions about the use of fourth parties or AI) does not provide sufficiently timely information. A once-a-year review could leave the institution in the dark for up to 11 months. Similarly, relying solely on vendor agreement language that requires immediate notification of AI or fourth-party use is a passive approach (although this language should still be included). A better practice is to pose these questions at least quarterly, or monthly during standing meetings with key vendors.
Ideally, institution leaders should not only be aware of a vendor’s use of AI but also play an active role in its development and deployment, especially if it impacts customer services. The same holds true for significant fourth-party involvement. Participating in vetting and implementation processes should be the gold standard and can help demonstrate meaningful oversight.
Failing to maintain awareness can lead to serious operational and managerial surprises—risks that institutions cannot afford.
Active Risk Management
Focusing specifically on AI use by vendors, there are four key risk areas institutions must actively mitigate:
- Explainability: If neither the vendor nor the institution can explain how the AI model works, this impacts model validation, especially if the model is used for AML/Fraud/OFAC, lending (e.g., valuations, underwriting), or other critical applications. A lack of clarity can result in findings across vendor management, IT, fair lending, UDAAP, and AML/Fraud/OFAC domains.
- Data Privacy: Even if data use complies with GLBA, it may not permit sharing with public AI models. Institutions must understand how customer data is used and ensure privacy protections are in place.
- Bias/Discrimination: AI tools have already drawn media attention due to alleged age discrimination. Bias can originate in model training and may only become apparent after significant customer data is processed—potentially too late to prevent harm.
- Accuracy: An AI tool may be explainable, unbiased, and privacy-compliant but still produce inaccurate results. “Accurate” refers to outcomes that are complete, correct, and timely. Imagine a flawed appraisal system influencing lending decisions, or a fraud detection tool that misses key indicators. Institutions must understand both the pre-implementation testing and ongoing evaluation performed by vendors.
Some consumer laws, such as ECOA/Regulation B and certain privacy statutes, carry a private right of action, opening the door to individual lawsuits or even class actions if a vendor implements AI incorrectly and a violation of law results.
Why This Matters
These risks mirror those involved in internally developed AI, but with a critical twist when AI is deployed by a vendor: the institution remains accountable for the vendor’s performance. Examiners and auditors will not accept “I’m not sure; let me put you in touch with the vendor.” They will expect the institution to explain how the vendor has deployed AI and what the impact is on risks such as explainability, data privacy, bias/discrimination, and accuracy, because it is the institution that is accountable for its use.
A Path Forward
The above risks can be mitigated by active risk management over the vendor, which can include:
- Requiring continuous AI monitoring by the vendor, with results provided to the institution
- Demanding full transparency about when AI is considered, implemented, and evaluated
- Incorporating AI-related questions into regular vendor meetings
- Leveraging internal second-line or third-line functions to independently test vendor AI tools
TPRM has evolved quickly in just the last few years. Today, institutions must ensure awareness and active management of both fourth-party risks and vendor-deployed AI, especially when these vendors impact customer-facing services. With the right communication, contractual terms, oversight, and testing protocols, these risks can be effectively integrated into an institution’s broader risk management strategy.
How Bates Group Helps
Bates Group offers ongoing advisory services to a wide range of financial institutions and Fintechs, including Independent Reviews and Risk Assessments, Compliance Program Support, and Custom Compliance Training. Contact us today to get your solution started.
