Yesterday I showed you where to build — the gaps between your core platforms where no vendor controls access. Today I'm showing you why that matters more than you think. Because the biggest threat to your AI strategy isn't the technology. It's the government.
Last July, Anthropic signed a $200 million contract with the Department of Defense — the first AI lab to integrate its models into classified military networks. Then came the Iran conflict. The Pentagon demanded unrestricted access to Claude for "all lawful purposes." Anthropic drew two lines: no fully autonomous weapons without human oversight, and no mass surveillance of Americans.
The Pentagon's response: Defense Secretary Pete Hegseth designated Anthropic a supply chain risk — the first time that label has ever been applied to an American company. Federal agencies now have six months to phase out Anthropic's technology. OpenAI entered negotiations within days.
If you build workflows around a specific AI vendor, you're subject to these decisions. One executive order. One regulatory designation. And the tool you've trained your team on, built your SOPs around, and embedded in your delivery model gets pulled out from under you — not because the technology failed, but because the politics shifted.
Next Friday, March 20 at 3pm Eastern, I'm going deep on Sequoia Capital's thesis that your practice is a $50–80 billion disruption target — and what to do about it. A full hour. Free. Register at theaiaccountant.ai/webinar.
The state-level regulatory mess facing accounting firms
The federal picture is volatile enough. The state level is worse.
Oregon's SB 1546 has passed both chambers and is headed to the governor's desk. It requires chatbot disclosure, mandates crisis response protocols for AI systems, and includes specific protections for minors. It takes effect January 2027. Oregon joins a growing list of states that aren't waiting for federal guidance.
Across the U.S., 78 AI-related bills are moving through 27 state legislatures. No unified framework. No consistent definitions. California alone estimates compliance costs at $16,000 annually for small businesses — and that's one state. A recent analysis pegs the average compliance overhead at 17% for firms operating across state lines.
Meanwhile, the federal government is going the other direction. The Trump administration's executive order on AI favors deregulation and explicitly pushes back on state-level restrictions. Utah became the test case — the federal government pressured the state to soften its AI disclosure requirements, signaling that preemption battles are coming.
For a CAS practice operating across state lines — or serving clients who do — the rules now change depending on which state you're in, which vendor you use, and which administration is setting the priorities.
Why tool-agnostic AI workflows are essential
This is where yesterday's argument becomes a risk management strategy, not just an efficiency play.
If your AI workflows are deeply embedded in a single vendor's ecosystem — their prompts, their integrations, and their proprietary agent frameworks — you're exposed. Not just to the vendor's product decisions, but to every government action that affects that vendor. A regulatory change could disable a capability you depend on. A procurement shift could change which tools receive enterprise support or market investment.
The firms building in the gaps — using AI for communication, document processing, analysis, advisory prep, and context engineering — have built something no government decision can reach. Those workflows don't depend on a specific vendor's API. They don't require a specific model. They run on structured knowledge about your clients, your processes, and your firm — knowledge that transfers across any tool.
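To make the architecture concrete: a tool-agnostic workflow keeps the firm's structured knowledge and the workflow logic on your side of a thin interface, so any vendor's model can be swapped in behind it. This is a minimal sketch, not any vendor's real SDK; all names (`ClientContext`, `ModelBackend`, `advisory_prep`) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Protocol

# Structured firm knowledge that travels with you, not with a vendor.
# This schema is illustrative, not any product's actual format.
@dataclass
class ClientContext:
    name: str
    entity_type: str
    fiscal_year_end: str
    notes: str

class ModelBackend(Protocol):
    """Any vendor's model, hidden behind one thin interface."""
    def complete(self, prompt: str) -> str: ...

class StubBackend:
    """Stand-in backend; swap in any vendor SDK without touching workflows."""
    def complete(self, prompt: str) -> str:
        return f"[draft based on: {prompt[:60]}...]"

def advisory_prep(backend: ModelBackend, client: ClientContext) -> str:
    # The workflow owns the prompt and the context; the backend is replaceable.
    prompt = (
        f"Prepare advisory talking points for {client.name} "
        f"({client.entity_type}, FYE {client.fiscal_year_end}). "
        f"Firm notes: {client.notes}"
    )
    return backend.complete(prompt)

client = ClientContext("Acme LLC", "S-corp", "12/31",
                       "cash-basis, multi-state payroll")
print(advisory_prep(StubBackend(), client))
```

If a vendor is pulled from the market, only the backend class changes; every workflow, prompt, and piece of client context stays intact.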
That's not just operational flexibility. That's political risk insurance.
The global AI regulation picture isn't better
The EU AI Act hits its most significant compliance milestone on August 2, 2026 — when obligations for high-risk AI systems, transparency rules, and national sandbox requirements all take effect. Fines for non-compliance run up to 35 million euros or 7% of global turnover. And it has extraterritorial reach — if you serve EU clients or use AI tools that process EU data, you're subject to it regardless of where your firm sits.
Canada offers the opposite problem. No comprehensive federal AI legislation exists. The Artificial Intelligence and Data Act has stalled. Canadian firms have more flexibility today — but zero regulatory predictability. When legislation arrives, it'll come all at once.
The lesson: regulatory uncertainty isn't the absence of risk. It's the presence of unpriced risk. Whether your jurisdiction is moving fast, moving slow, or not moving at all, the firms that build portable, tool-agnostic AI capability are the ones who'll adapt without rebuilding.
The leadership decision
This isn't a technology problem. It's a strategic leadership problem.
The decisions landing on your desk — which AI tools to invest in, how deeply to embed them, how to structure workflows for resilience — are exactly the kind of decisions that benefit from peers who've already wrestled with them. Not a webinar. Not a conference session you forget by Monday. Structured, facilitated conversations with other firm leaders navigating the same uncertainty.
Tomorrow I'll show you how fast the window to build is closing — Anthropic's labor market data makes the timeline concrete. But the government risk covered today is what makes how you build as important as whether you build at all.
This is why I created the AI Champions Circle. A small group of CAS practice leaders — owners and internal champions — working through exactly these decisions together. Structured peer sessions, not a Slack channel. Curated AI intelligence briefings so you're not spending your weekends reading every announcement. Direct access to me between meetings when something lands on your desk that can't wait.
The firms moving fastest on AI right now shouldn't be doing it alone. They need a well-informed community that shares what's working, flags what isn't, and holds each other accountable for actually implementing — not just talking about it. That's the Circle. If you're making these calls in isolation, you don't have to. Join at theaiaccountant.ai/circle.