8 min read
Justin Kirsch, Mar 3, 2026
On March 2, 2026, the Federal Housing Finance Agency terminated all use of Anthropic AI products across its operations, including at Fannie Mae and Freddie Mac. FHFA Director William Pulte confirmed that the mortgage GSEs would cease using Anthropic's Claude platform immediately. The action followed President Trump's directive ordering federal agencies to cut ties with Anthropic after the company refused to remove safety restrictions on its AI for Pentagon use.
For mortgage lenders and servicers, this is not just a Washington policy story. FHFA regulates the two entities that define your secondary market access. When the agency that oversees Fannie Mae and Freddie Mac decides an AI vendor is unacceptable for its own operations, every mortgage company should be asking: what does this mean for my AI vendor relationships?
The termination started with a dispute between Anthropic and the Pentagon. The Department of Defense wanted to use Anthropic's Claude AI for all lawful purposes, including defense and intelligence operations. Anthropic drew two lines: Claude would not be used for autonomous weapons systems and would not be used for mass surveillance of American citizens. CEO Dario Amodei stated publicly that threats would not change their position.
Defense Secretary Pete Hegseth responded by designating Anthropic as a supply-chain risk to national security. President Trump then ordered all federal agencies to terminate Anthropic contracts. Within days, Treasury, FHFA, HHS, and the State Department began shedding their Anthropic relationships. Multiple agencies announced they would transition to OpenAI as an alternative.
FHFA's action carries specific weight for the mortgage industry. Unlike Treasury or HHS, FHFA directly regulates Fannie Mae and Freddie Mac. When Director Pulte extended the termination to include both GSEs, he sent a signal that reached every mortgage company in the country: the regulator of your secondary market access considers this AI vendor a risk.
The government has a six-month runway to complete the phase-out. Anthropic has stated it will challenge the supply-chain risk designation in court.
This is happening at the same time Freddie Mac Bulletin 2025-16 mandates AI governance for all seller/servicers (effective March 3, 2026). The same regulatory ecosystem that now requires you to govern your AI is simultaneously demonstrating that AI vendor relationships can be disrupted overnight by forces entirely outside your control. If your compliance, operations, or technology stack depends on a single AI vendor, the FHFA-Anthropic episode is a warning you cannot ignore. See our complete breakdown: Freddie Mac AI Mandate Compliance Checklist.
The FHFA-Anthropic termination reveals a risk that most mortgage companies have not accounted for: AI vendor disruption driven by political, regulatory, or geopolitical forces rather than technology failure.
Traditional vendor risk management focuses on operational stability, data security, and financial viability. Will the vendor stay in business? Will they protect your data? Can they meet uptime requirements? Those questions still matter. But the Anthropic episode adds a new dimension: can your AI vendor's relationship with the federal government change your operational landscape overnight?
FHFA regulates the GSEs. The GSEs set selling and servicing requirements for mortgage companies. When FHFA signals concern about an AI vendor, the message flows downstream. Today, Freddie Mac and Fannie Mae are terminating their own use of Anthropic. Tomorrow, examiner questions about your AI vendor choices could follow.
Consider the chain of events. Anthropic refused to relax AI safety restrictions. The Pentagon labeled them a supply-chain risk. The President ordered agencies to cut ties. FHFA extended that to the GSEs. If your LOS vendor, document AI provider, or servicing platform uses Anthropic's models under the hood, you now have a vendor-within-a-vendor risk that sits squarely within your AI risk management framework.
Most mortgage companies use AI through their existing technology vendors without fully mapping where that AI comes from, how it works, or what would happen if it disappeared.
AI is likely embedded in your mortgage technology stack right now: in your LOS, your document AI and OCR tools, your QC platform, and your servicing systems.
The question is not whether you use vendor AI. You almost certainly do. The question is whether you know which AI models power each of these functions, who provides them, and what your contingency plan is if one of those vendor relationships changes.
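One practical first step is a simple dependency inventory that records, for each AI-powered function, the vendor you contract with and the foundation model behind it. The sketch below is a minimal illustration; the vendor names, model mappings, and fields are hypothetical placeholders, not real product relationships.

```python
# Hypothetical sketch of an AI dependency inventory for a mortgage tech stack.
# Vendor names and model mappings below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AIDependency:
    function: str        # business function the AI supports
    vendor: str          # the vendor you contract with directly
    upstream_model: str  # foundation model behind the feature, if known
    critical: bool       # would a disruption halt loans in flight?


inventory = [
    AIDependency("Document classification", "ExampleDocAI", "unknown", True),
    AIDependency("Borrower chat assistant", "ExampleLOS", "Claude (Anthropic)", False),
    AIDependency("Income calculation", "ExampleQC", "GPT-4 (OpenAI)", True),
]

# Surface the entries where you cannot name the upstream provider:
# those are the vendor-within-a-vendor blind spots.
unknown_upstream = [d.function for d in inventory if d.upstream_model == "unknown"]
print(unknown_upstream)  # ['Document classification']
```

Even a spreadsheet version of this inventory answers the three questions above; the point is that the upstream model column exists at all.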
According to the National Mortgage News Emerging Tech and AI Survey (2025), 73% of nonbank lenders were quickly expanding their AI offerings, and 68% of bank lenders were doing the same. That expansion is largely happening through vendor partnerships, not internal development.
If your mortgage company does not have a formal AI vendor risk assessment process, build one now. The FHFA-Anthropic situation demonstrates that standard vendor due diligence is not sufficient for AI. You need to ask questions specific to AI risk. The broader pattern of automation risks in mortgage operations is explored in our analysis of the hidden risks in mortgage automation.
These questions align with the Interagency Guidance on Third-Party Relationships (OCC Bulletin 2023-17, FDIC FIL-2023-29), which establishes principles for managing third-party vendor risk. The guidance applies to AI vendors as much as it applies to any other technology relationship.
"As organizations deepen partnerships with major cloud and AI providers, regulators and executives are increasingly focused on concentration risk, the concern that reliance on a relatively small number of technology providers might create critical business vulnerabilities."
Microsoft Industry Blog, February 2026

Your vendor contracts may need updating. Standard technology service agreements often do not address AI-specific risks. Based on regulatory guidance and the lessons of the FHFA-Anthropic termination, mortgage companies should negotiate provisions covering model transparency and upstream provider disclosure, audit rights, advance notification of model changes, data portability and exit terms, and limits on using borrower data for model training.
Risk assessment and contract provisions are defensive measures. Building genuine resilience requires a broader strategy.
If your entire document processing pipeline depends on one AI vendor, a disruption to that vendor disrupts your operations. Where practical, evaluate alternative vendors for critical AI functions. Even if you do not switch today, knowing your options and having evaluated alternatives puts you in a stronger position if a change is forced.
The Financial Stability Board has flagged AI vendor concentration as a systemic risk for financial services. Black Kite's 2026 Third-Party Breach Report found an average of 5.28 downstream victims per third-party breach, the highest level recorded, indicating how vendor disruptions cascade through interconnected systems.
The Anthropic episode illustrates that your vendor's AI provider can become your problem. Ask your LOS provider, document AI vendor, and QC platform whether they use Anthropic, OpenAI, Google, or other foundation models. Map those upstream dependencies so you understand your full exposure.
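Once you have mapped upstream providers, a quick concentration check flags the ones whose disruption would take out more than one function at a time. This is an illustrative sketch; the function-to-provider mapping is invented for the example.

```python
# Illustrative concentration check: given a mapping of business functions to
# the upstream foundation-model provider behind each vendor feature (all
# mappings here are hypothetical), flag providers whose loss would disrupt
# more than one function at once.
from collections import defaultdict

upstream_by_function = {
    "Document classification": "Anthropic",
    "Income calculation": "Anthropic",
    "Borrower chat": "OpenAI",
    "Fraud screening": "Google",
}

functions_by_provider = defaultdict(list)
for function, provider in upstream_by_function.items():
    functions_by_provider[provider].append(function)

concentrated = {p: fns for p, fns in functions_by_provider.items() if len(fns) > 1}
print(concentrated)
# {'Anthropic': ['Document classification', 'Income calculation']}
```

In this invented example, a single upstream provider sits behind two functions, which is exactly the exposure the FHFA-Anthropic episode made visible.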
You do not need to build your own AI models. But you do need staff who can evaluate AI vendor claims, test AI outputs, and make informed decisions about AI risk. Invest in AI literacy for your compliance, technology, and operations teams.
Annual vendor reviews are not sufficient for AI vendors. AI technology changes faster than traditional software. Schedule quarterly reviews for vendors whose AI touches lending decisions, borrower data, or compliance-sensitive functions.
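That cadence rule can be captured as a simple triage function. This is a minimal sketch of the guidance above, not a formal policy; the field names are assumptions.

```python
# Sketch of a review-cadence rule matching the guidance above: quarterly
# reviews for AI vendors touching lending decisions, borrower data, or
# compliance-sensitive functions; annual otherwise. Field names are
# illustrative assumptions.

def review_cadence(touches_lending: bool,
                   handles_borrower_data: bool,
                   compliance_sensitive: bool) -> str:
    """Return the suggested review frequency for an AI vendor."""
    if touches_lending or handles_borrower_data or compliance_sensitive:
        return "quarterly"
    return "annual"


print(review_cadence(touches_lending=True, handles_borrower_data=False,
                     compliance_sensitive=False))  # quarterly
print(review_cadence(False, False, False))         # annual
```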
ABT works with hundreds of mortgage companies to manage their technology vendor ecosystem, including the due diligence and ongoing monitoring that AI vendor relationships now require. As the largest Tier-1 Microsoft Cloud Solution Provider primarily dedicated to financial services, ABT helps mortgage lenders build vendor risk management strategies that protect operations and maintain GSE compliance.
For more on protecting your technology environment, see our Encompass Cloud Hosting Configuration Guide and OWASP Top 10 for Agentic AI in Financial Institutions.
ABT helps mortgage companies inventory AI vendor dependencies, assess concentration risk, and build resilient technology strategies that maintain GSE compliance even when the vendor landscape shifts.
Talk to an Expert

Why did FHFA terminate its Anthropic contract?
FHFA terminated its Anthropic contract in March 2026 following President Trump's directive ordering federal agencies to stop using Anthropic technology. The directive came after Anthropic refused to remove AI safety restrictions for Pentagon use, and Defense Secretary Hegseth designated Anthropic as a supply-chain risk. FHFA extended the termination to include Fannie Mae and Freddie Mac.

Does the federal Anthropic ban apply to private mortgage companies?
The federal ban applies to government agencies and contractors, not directly to private mortgage companies. However, FHFA regulates Fannie Mae and Freddie Mac, which set requirements for seller/servicers. Mortgage companies should monitor whether GSE guidance extends vendor restrictions to their seller/servicer requirements, and should assess their own AI vendor dependencies regardless.

What AI vendor risks should mortgage lenders assess?
Mortgage lenders should assess upstream model provider risk, vendor concentration across critical functions, data handling and privacy practices, model transparency and auditability, business continuity if the vendor loses AI capabilities, and regulatory compliance alignment including GSE requirements under Freddie Mac Bulletin 2025-16.

How should mortgage companies evaluate AI vendors?
Evaluate AI vendors using a framework that addresses model provenance, training data practices, data handling and residency, performance validation for mortgage use cases, upstream provider dependencies, exit and portability provisions, and alignment with OCC Bulletin 2023-17 and FFIEC third-party risk management guidance. Schedule quarterly reviews for AI vendors touching lending decisions.

What provisions belong in AI vendor contracts?
Key contract provisions include AI model transparency and upstream provider disclosure, audit rights for model performance and bias testing, advance notification of model changes, data portability and exit provisions, prohibition on using borrower data for model training without consent, and requirements for GSE compliance support under Freddie Mac Section 1302.8.

How does this relate to Freddie Mac Bulletin 2025-16?
Freddie Mac Bulletin 2025-16, effective March 3, 2026, requires seller/servicers to govern all AI used in origination and servicing, including vendor-embedded AI. This means vendor AI risk assessment is not a separate initiative but a required component of your Freddie Mac compliance program. The FHFA-Anthropic termination adds urgency to building vendor resilience within your governance framework.
CEO, Access Business Technologies
Justin Kirsch has managed technology vendor relationships for mortgage companies since 1999, navigating through multiple waves of vendor consolidation and technology disruption. As CEO of Access Business Technologies, he helps mortgage lenders build vendor risk management strategies that protect borrower data and maintain GSE compliance.