Mortgage Workspace Blog

FHFA Drops Anthropic: What AI Vendor Risk Means for Mortgage Lenders

Written by Justin Kirsch | Mar 4, 2026 6:43:12 AM

On March 2, 2026, the Federal Housing Finance Agency terminated all use of Anthropic AI products across its operations, including at Fannie Mae and Freddie Mac. FHFA Director William Pulte confirmed that the mortgage GSEs would cease using Anthropic's Claude platform immediately. The action followed President Trump's directive ordering federal agencies to cut ties with Anthropic after the company refused to remove safety restrictions on its AI for Pentagon use.

For mortgage lenders and servicers, this is not just a Washington policy story. FHFA regulates the two entities that define your secondary market access. When the agency that oversees Fannie Mae and Freddie Mac decides an AI vendor is unacceptable for its own operations, every mortgage company should be asking: what does this mean for my AI vendor relationships?

March 2, 2026: FHFA, Treasury, and HHS terminated Anthropic AI contracts following the executive directive, with Fannie Mae and Freddie Mac also ceasing use. (Source: Nextgov, CNN Business, March 2026)

What Happened: FHFA Terminates Anthropic AI Contract

The termination started with a dispute between Anthropic and the Pentagon. The Department of Defense wanted to use Anthropic's Claude AI for all lawful purposes, including defense and intelligence operations. Anthropic drew two lines: Claude would not be used for autonomous weapons systems and would not be used for mass surveillance of American citizens. CEO Dario Amodei stated publicly that threats would not change their position.

Defense Secretary Pete Hegseth responded by designating Anthropic as a supply-chain risk to national security. President Trump then ordered all federal agencies to terminate Anthropic contracts. Within days, Treasury, FHFA, HHS, and the State Department began shedding their Anthropic relationships. Multiple agencies announced they would transition to OpenAI as an alternative.

FHFA's action carries specific weight for the mortgage industry. Unlike Treasury or HHS, FHFA directly regulates Fannie Mae and Freddie Mac. When Director Pulte extended the termination to include both GSEs, he sent a signal that reached every mortgage company in the country: the regulator of your secondary market access considers this AI vendor a risk.

The government has a six-month runway to complete the phase-out. Anthropic has stated it will challenge the supply-chain risk designation in court.

Why This Matters Right Now

This is happening at the same time Freddie Mac Bulletin 2025-16 mandates AI governance for all seller/servicers (effective March 3, 2026). The same regulatory ecosystem that now requires you to govern your AI is simultaneously demonstrating that AI vendor relationships can be disrupted overnight by forces entirely outside your control. If your compliance, operations, or technology stack depends on a single AI vendor, the FHFA-Anthropic episode is a warning you cannot ignore. See our complete breakdown: Freddie Mac AI Mandate Compliance Checklist.

Why Mortgage Lenders Should Pay Attention

The FHFA-Anthropic termination reveals a risk that most mortgage companies have not accounted for: AI vendor disruption driven by political, regulatory, or geopolitical forces rather than technology failure.

Traditional vendor risk management focuses on operational stability, data security, and financial viability. Will the vendor stay in business? Will they protect your data? Can they meet uptime requirements? Those questions still matter. But the Anthropic episode adds a new dimension: can your AI vendor's relationship with the federal government change your operational landscape overnight?

FHFA regulates the GSEs. The GSEs set selling and servicing requirements for mortgage companies. When FHFA signals concern about an AI vendor, the message flows downstream. Today, Freddie Mac and Fannie Mae are terminating their own use of Anthropic. Tomorrow, examiner questions about your AI vendor choices could follow.

Consider the chain of events. Anthropic refused to relax AI safety restrictions. The Pentagon labeled them a supply-chain risk. The President ordered agencies to cut ties. FHFA extended that to the GSEs. If your LOS vendor, document AI provider, or servicing platform uses Anthropic's models under the hood, you now have a vendor-within-a-vendor risk that sits squarely within your AI risk management framework.

63% of mortgage lenders use AI for document classification and indexing, with the majority running this AI through vendor platforms. (Source: National Mortgage News Emerging Tech and AI Survey, 2025)

The AI Vendor Risk You Are Already Carrying

Most mortgage companies use AI through their existing technology vendors without fully mapping where that AI comes from, how it works, or what would happen if it disappeared.

Here is where AI is likely embedded in your mortgage technology stack right now:

  • Loan Origination System (LOS): ICE Mortgage Technology's Encompass, Black Knight's Empower, and other platforms increasingly embed AI for document classification, data extraction, and automated condition generation
  • Document Processing: Vendors like Ocrolus, which now integrates directly with Encompass, use AI to classify over 1,600 financial document types and pre-populate income calculations
  • Automated Valuation Models (AVMs): Machine learning models trained on historical property data underpin property valuations used in origination and servicing
  • Fraud Detection: Pattern recognition models that flag suspicious applications, document anomalies, and identity verification concerns
  • Quality Control: AI-powered defect detection in post-closing QC workflows
  • Borrower Communication: Chatbots, automated email responders, and communication tools that may use generative AI models
  • Servicing Decisions: AI-assisted loss mitigation, payment modification, and collections decisioning

The question is not whether you use vendor AI. You almost certainly do. The question is whether you know which AI models power each of these functions, who provides them, and what your contingency plan is if one of those vendor relationships changes.
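Answering that question starts with a written inventory. A minimal sketch of one is below; the vendor names, functions, and fields are illustrative placeholders, not real integrations from this article:

```python
# Hypothetical sketch: a minimal AI inventory for a mortgage technology
# stack. Every vendor name and entry here is illustrative only.
from dataclasses import dataclass

@dataclass
class AIFunction:
    function: str           # what the AI does in your stack
    vendor: str             # who you contract with
    upstream_provider: str  # whose foundation model the vendor relies on
    contingency_plan: bool  # has the vendor documented a fallback model?

inventory = [
    AIFunction("Document classification", "DocVendorA", "Anthropic", False),
    AIFunction("Income calculation", "DocVendorA", "Anthropic", False),
    AIFunction("Fraud detection", "RiskVendorB", "In-house model", True),
    AIFunction("Borrower chatbot", "CommsVendorC", "OpenAI", True),
]

# Flag the functions with no documented contingency plan.
gaps = [f.function for f in inventory if not f.contingency_plan]
print("No contingency plan:", gaps)
# -> No contingency plan: ['Document classification', 'Income calculation']
```

Even a spreadsheet with these four columns gives examiners, and your own operations team, a single place to see which functions share an upstream provider and which have no fallback.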

According to the National Mortgage News Emerging Tech and AI Survey (2025), 73% of nonbank lenders were quickly expanding their AI offerings, and 68% of bank lenders were doing the same. That expansion is largely happening through vendor partnerships, not internal development.

Where AI is embedded in a typical mortgage company's technology stack. Most of this AI runs inside vendor platforms.

Third-Party AI Risk Assessment: A Practical Framework

If your mortgage company does not have a formal AI vendor risk assessment process, build one now. The FHFA-Anthropic situation demonstrates that standard vendor due diligence is not sufficient for AI. You need to ask questions specific to AI risk. The broader pattern of automation risks in mortgage operations is explored in our analysis of the hidden risks in mortgage automation.

Key Questions for Every AI Vendor

  • Where does the vendor's AI model come from? Does your vendor build its own models, license them from a foundation model provider (OpenAI, Anthropic, Google, Meta), or use open-source models? If they license from a third party, your vendor risk assessment must account for the upstream provider.
  • How is the model trained? What data was used to train the model? Does the model train on your data? How is model performance validated for mortgage-specific use cases?
  • What data does the AI access? Does the model process borrower PII, credit data, or financial documents? Where is that data stored and processed? Does data leave your environment?
  • What happens if the vendor loses its AI capabilities? If the vendor's upstream AI provider is disrupted (as happened with Anthropic), does your vendor have a contingency plan? Can they switch to an alternative model without disrupting your operations?
  • What is the exit strategy? Can you move to a different vendor without losing data, re-training models, or rebuilding integrations? What is the realistic timeline and cost for a vendor transition?
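One way to make these five questions operational is to capture them as a structured record so answers are comparable across vendors. The sketch below is illustrative; the field names and vendor name are assumptions, not part of any standard:

```python
# Hypothetical sketch: the five due-diligence questions above as a
# structured questionnaire, so unanswered items surface automatically.
questions = {
    "model_provenance": "Where does the vendor's AI model come from?",
    "training": "How is the model trained, and does it train on your data?",
    "data_access": "What borrower data does the AI access, and where?",
    "contingency": "What happens if the vendor loses its AI capabilities?",
    "exit": "What is the exit strategy, timeline, and cost?",
}

def open_items(answers):
    """Return the questions a vendor has not yet answered."""
    return {k: q for k, q in questions.items() if not answers.get(k)}

# Example: a vendor that has answered two of the five questions.
vendor_answers = {
    "model_provenance": "Licensed third-party foundation model",
    "exit": "90-day data export window",
}
print(sorted(open_items(vendor_answers)))
# -> ['contingency', 'data_access', 'training']
```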

These questions align with the Interagency Guidance on Third-Party Relationships (OCC Bulletin 2023-17, FDIC FIL-2023-29), which establishes principles for managing third-party vendor risk. The guidance applies to AI vendors as much as it applies to any other technology relationship.

"As organizations deepen partnerships with major cloud and AI providers, regulators and executives are increasingly focused on concentration risk, the concern that reliance on a relatively small number of technology providers might create critical business vulnerabilities."

Microsoft Industry Blog, February 2026

Contract Provisions Every Mortgage Company Needs for AI Vendors

Your vendor contracts may need updating. Standard technology service agreements often do not address AI-specific risks. Based on regulatory guidance and the lessons of the FHFA-Anthropic termination, here are the provisions mortgage companies should include in AI vendor contracts.

AI Model Transparency

  • Require vendors to disclose which AI models their products use and identify any upstream model providers
  • Require notification when the vendor changes the underlying AI model, training data, or model architecture
  • Specify that the vendor must disclose any AI components added to existing products, not just purpose-built AI features

Audit and Testing Rights

  • Retain the right to audit AI model performance, bias testing results, and validation documentation
  • Require the vendor to provide model performance data relevant to mortgage-specific use cases on a scheduled basis
  • Include the right to conduct independent testing of AI outputs for fair lending and accuracy

Data Handling Requirements

  • Specify that borrower data processed by vendor AI must not be used for model training without explicit consent
  • Define data residency requirements for AI processing
  • Require data portability provisions so your data can be extracted if the vendor relationship ends

Change Notification and Contingency

  • Require advance notification of any material changes to AI functionality, model providers, or data handling practices
  • Define what constitutes a material change that triggers notification
  • Require the vendor to maintain a documented contingency plan if their upstream AI provider becomes unavailable

GSE Compliance Obligations

  • Require the vendor to support your compliance with Freddie Mac Section 1302.8 AI governance requirements
  • Require the vendor to cooperate with GSE examination requests related to AI use
  • Include termination provisions if the vendor cannot demonstrate compliance with applicable regulatory requirements

Building AI Vendor Resilience

Risk assessment and contract provisions are defensive measures. Building genuine resilience requires a broader strategy.

Avoid Single-Vendor AI Dependency

If your entire document processing pipeline depends on one AI vendor, a disruption to that vendor disrupts your operations. Where practical, evaluate alternative vendors for critical AI functions. Even if you do not switch today, knowing your options and having evaluated alternatives puts you in a stronger position if a change is forced.

The Financial Stability Board has flagged AI vendor concentration as a systemic risk for financial services. Black Kite's 2026 Third-Party Breach Report found an average of 5.28 downstream victims per third-party breach, the highest level recorded, indicating how vendor disruptions cascade through interconnected systems.

Understand Your Vendor's Vendor

The Anthropic episode illustrates that your vendor's AI provider can become your problem. Ask your LOS provider, document AI vendor, and QC platform whether they use Anthropic, OpenAI, Google, or other foundation models. Map those upstream dependencies so you understand your full exposure.
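Once those upstream dependencies are mapped, a simple tally shows whether one foundation-model provider dominates your stack. A minimal sketch, with made-up function-to-provider mappings:

```python
# Hypothetical sketch: measuring upstream foundation-model concentration
# across vendor-embedded AI functions. The mappings are illustrative.
from collections import Counter

upstream_by_function = {
    "LOS document extraction": "OpenAI",
    "Document classification": "Anthropic",
    "Income analysis": "Anthropic",
    "Post-closing QC": "Anthropic",
    "Borrower chatbot": "OpenAI",
}

exposure = Counter(upstream_by_function.values())
total = len(upstream_by_function)
for provider, count in exposure.most_common():
    share = count / total
    flag = "  <- concentration risk" if share > 0.5 else ""
    print(f"{provider}: {count}/{total} functions ({share:.0%}){flag}")
# -> Anthropic: 3/5 functions (60%)  <- concentration risk
# -> OpenAI: 2/5 functions (40%)
```

The 50% threshold here is an arbitrary illustration; the point is that a provider you never contracted with directly can still account for most of your AI-dependent workflow.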

Build Internal AI Competency

You do not need to build your own AI models. But you do need staff who can evaluate AI vendor claims, test AI outputs, and make informed decisions about AI risk. Invest in AI literacy for your compliance, technology, and operations teams.

Schedule Regular Vendor Reviews

Annual vendor reviews are not sufficient for AI vendors. AI technology changes faster than traditional software. Schedule quarterly reviews for vendors whose AI touches lending decisions, borrower data, or compliance-sensitive functions.

ABT works with hundreds of mortgage companies to manage their technology vendor ecosystem, including the due diligence and ongoing monitoring that AI vendor relationships now require. As the largest Tier-1 Microsoft Cloud Solution Provider primarily dedicated to financial services, ABT helps mortgage lenders build vendor risk management strategies that protect operations and maintain GSE compliance.

For more on protecting your technology environment, see our Encompass Cloud Hosting Configuration Guide and OWASP Top 10 for Agentic AI in Financial Institutions.

Concerned About Your AI Vendor Exposure?

ABT helps mortgage companies inventory AI vendor dependencies, assess concentration risk, and build resilient technology strategies that maintain GSE compliance even when the vendor landscape shifts.

Talk to an Expert

Frequently Asked Questions

Why did FHFA terminate its Anthropic contract?

FHFA terminated its Anthropic contract in March 2026 following President Trump's directive ordering federal agencies to stop using Anthropic technology. The directive came after Anthropic refused to remove AI safety restrictions for Pentagon use, and Defense Secretary Hegseth designated Anthropic as a supply-chain risk. FHFA extended the termination to include Fannie Mae and Freddie Mac.

Does the federal Anthropic ban apply to private mortgage companies?

The federal ban applies to government agencies and contractors, not directly to private mortgage companies. However, FHFA regulates Fannie Mae and Freddie Mac, which set requirements for seller/servicers. Mortgage companies should monitor whether GSE guidance extends vendor restrictions to their seller/servicer requirements, and should assess their own AI vendor dependencies regardless.

What AI vendor risks should mortgage lenders assess?

Mortgage lenders should assess upstream model provider risk, vendor concentration across critical functions, data handling and privacy practices, model transparency and auditability, business continuity if the vendor loses AI capabilities, and regulatory compliance alignment including GSE requirements under Freddie Mac Bulletin 2025-16.

How should mortgage companies evaluate AI vendors?

Evaluate AI vendors using a framework that addresses model provenance, training data practices, data handling and residency, performance validation for mortgage use cases, upstream provider dependencies, exit and portability provisions, and alignment with OCC Bulletin 2023-17 and FFIEC third-party risk management guidance. Schedule quarterly reviews for AI vendors touching lending decisions.

What contract provisions should mortgage companies include in AI vendor agreements?

Key contract provisions include AI model transparency and upstream provider disclosure, audit rights for model performance and bias testing, advance notification of model changes, data portability and exit provisions, prohibition on using borrower data for model training without consent, and requirements for GSE compliance support under Freddie Mac Section 1302.8.

How does Freddie Mac Bulletin 2025-16 relate to AI vendor risk?

Freddie Mac Bulletin 2025-16, effective March 3, 2026, requires seller/servicers to govern all AI used in origination and servicing, including vendor-embedded AI. This means vendor AI risk assessment is not a separate initiative but a required component of your Freddie Mac compliance program. The FHFA-Anthropic termination adds urgency to building vendor resilience within your governance framework.

Justin Kirsch

CEO, Access Business Technologies

Justin Kirsch has managed technology vendor relationships for mortgage companies since 1999, navigating through multiple waves of vendor consolidation and technology disruption. As CEO of Access Business Technologies, he helps mortgage lenders build vendor risk management strategies that protect borrower data and maintain GSE compliance.