This week wasn't about new tools. It was about which of the assumptions you've been running on are still standing.
KPMG cut 100 of its 1,400 US audit partners. OpenAI shipped GPT-5.5 six weeks after GPT-5.4 and reclaimed the frontier. Salesforce, OpenAI, Microsoft, and Google all shipped agent-first infrastructure in the same five-day stretch. And Anthropic published a post-mortem confirming Claude Code, the Agent SDK, and Cowork were silently degraded for six weeks while the firms building on them didn't know.
Three threads, one pattern. The safe harbors are disappearing.
Partner equity is no longer a safe harbor
KPMG announced this week that 7% of its US audit partners and managing directors are out — after multi-year voluntary retirement programs failed to hit productivity targets. The firm's statement tied the cuts directly to "the power of our audit platform" — code for the AI tools KPMG and its competitors have spent billions building.
Read that against the rest of the month. PwC restructured around PwC One. Accenture is cutting 11,000 with Julie Sweet's "compressed timeline" comments hanging in the air. EY and KPMG in the UK are quietly demoting equity partners to salaried roles. Deloitte cut PTO by 5–10 days for most US employees and froze pension accruals after 2026.
Three weeks, four firms, one direction.
If you're benchmarking your succession plan against partnership-track economics, that benchmark is moving fast. Earlier this month we wrote about the K-shaped valuation problem — recurring revenue alone isn't the moat buyers price in, and the gap between AI-enabled firms and legacy compliance shops is widening. KPMG adds a data point. Pricing compression is hitting valuations, and the partnership model itself is under threat — first from venture capital rolling up the category, now from inside the Big 4.
One wildcard. As partners get pushed out, some may look to buy into smaller practices. Don't bank on it — they may also start their own books, get scooped up by PE, or just retire. Plan for the displacement, not the lifeline.
Single-vendor strategy is no longer a safe harbor
OpenAI shipped GPT-5.5 on April 23 — six weeks after GPT-5.4. On the independent tests used to compare AI models, GPT-5.5 now leads Anthropic's Claude Opus 4.7 on coding tasks, knowledge work, and complex reasoning by meaningful margins. Sam Altman's launch line was deliberately understated: "We hope it's useful to you. I personally like it."
Pay attention to the cost shift. AI providers charge by the word — both for what you send the AI and what it returns. GPT-5.5's per-word rate runs about 20% above Opus 4.7's. But GPT-5.5 uses roughly 40% fewer words to finish the same task, so the actual bill comes out lower. Independent testing put GPT-5.5 at about a quarter of the cost of Opus 4.7 to complete the same work. The right comparison isn't the rate on the price card — it's what it costs to finish the job.
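The rate-versus-usage math is easy to sanity-check yourself. A minimal sketch using the two figures above (a per-word rate about 20% higher, roughly 40% fewer words per task) as illustrative inputs — all numbers are normalized stand-ins, not real price-card figures; the larger gap reported in independent testing sits on top of arithmetic like this:

```python
def effective_cost(rate_per_word: float, words_per_task: float) -> float:
    """Cost to finish one task: the price-card rate times actual usage."""
    return rate_per_word * words_per_task

# Normalize the baseline model to a rate of 1.0 and 1,000 words per task.
baseline_cost = effective_cost(1.00, 1000)
# ~20% higher rate, ~40% fewer words for the same task:
challenger_cost = effective_cost(1.20, 600)

print(challenger_cost / baseline_cost)  # 0.72 -> ~28% cheaper despite the higher rate
```

The point the example makes is the one in the paragraph: the higher rate loses to the lower word count, so comparing price cards alone gets the answer backwards.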
For firms that picked OpenAI as their standard, the temptation is vindication. It isn't. The same week, Anthropic published a post-mortem confirming engineering errors silently degraded three of its products — including Claude Cowork, the version CAS practices are increasingly building on — between March 4 and April 16. The model was set to think less than it should. It lost its memory between conversation turns. It was told to keep responses under 100 words. All reverted, but firms running production work on those products spent six weeks wondering why their AI got worse.
The lesson isn't which model to pick. It's that betting on a single vendor is the wrong bet right now. Models leapfrog every six weeks. Vendors degrade quality silently. The firms quietly winning run two models in parallel — Claude Opus to plan and GPT-5.5 to execute is the pattern emerging — and treat model choice as a workflow design decision, not a procurement decision.
I walked through the three tests every CAS firm needs in place to detect this kind of vendor drift before it shows up in a client deliverable.
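Whatever form your own tests take, the core mechanism is the same: a fixed canary prompt run on a schedule, with responses checked against a recorded baseline. A minimal, vendor-agnostic sketch — the baseline numbers are illustrative, and `check_drift` is a hypothetical helper, not any vendor's API:

```python
# Vendor-agnostic drift canary: run the same prompt on a schedule and flag
# responses that fall outside a baseline envelope recorded from known-good runs.

BASELINE = {"min_words": 150, "max_words": 600}  # illustrative envelope

def check_drift(response: str, baseline: dict = BASELINE) -> list[str]:
    """Return a list of drift warnings; an empty list means the canary passed."""
    words = len(response.split())
    warnings = []
    if words < baseline["min_words"]:
        warnings.append(f"response too short ({words} words): possible silent cap")
    if words > baseline["max_words"]:
        warnings.append(f"response too long ({words} words)")
    return warnings

# Anthropic's post-mortem described responses silently capped near 100 words;
# a daily canary like this flags that failure mode the first day it appears.
print(check_drift("a suspiciously terse answer"))
```

Length is only the crudest signal — the same pattern extends to checking that multi-turn memory and reasoning depth match the baseline — but even this much would have surfaced the six-week degradation in days, not weeks.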
Per-seat pricing is no longer a safe harbor
Four platforms shipped agent-first infrastructure in five days. Salesforce Headless 360 exposes its entire stack via API, MCP, and CLI — co-founder Parker Harris asked publicly, "why should you ever log into Salesforce again?" OpenAI's Workspace Agents launched as the Codex-powered successor to Custom GPTs. Microsoft's Hosted Agents in Foundry run multi-model — OpenAI, Anthropic, Meta, and Mistral. Google relaunched Vertex AI as Gemini Enterprise with Agent Studio, Registry, Identity, and Gateway.
Agents are becoming first-class users of enterprise software, and per-seat pricing doesn't survive contact with that reality. Aakash Gupta's question is the one your vendors are about to face: once one agent runs the same workflow across Salesforce, Jira, and Slack, why pay $30/seat for AI inside five separate products?
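Gupta's question is ultimately arithmetic. A back-of-the-envelope sketch — the $30/seat figure is from the question above; the headcount, product count, and agent-side costs are hypothetical:

```python
# Per-seat AI add-ons vs. one agent spanning the same workflow.
# All numbers are illustrative, not quotes from any vendor.

seats = 40            # staff holding licenses
products = 5          # e.g. Salesforce, Jira, Slack, each with its own AI tier
per_seat_ai = 30      # $/seat/month for the AI add-on in each product

per_seat_total = seats * products * per_seat_ai
print(f"Per-seat AI add-ons: ${per_seat_total:,}/month")

# One agent running the cross-product workflow, priced on usage instead:
tasks_per_month = 2_000
cost_per_task = 0.50  # hypothetical inference + orchestration cost
agent_total = tasks_per_month * cost_per_task
print(f"Single cross-product agent: ${agent_total:,.0f}/month")
```

Change the assumptions and the totals move, but the structural point holds: the per-seat number scales with headcount times product count, while the agent number scales with work done.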
Now look at your stack. Karbon, Financial Cents, Ignition, Dext, HubSpot — every practice management and AP/AR platform you pay per seat for is built on the assumption that just got challenged. Platforms trying to wall off their data and be all things to all people are the ones most exposed.
The contrast on the ledger side is sharp. Digits — the AI-native general ledger — shipped a free MCP server this week with no plan restrictions, plugging into Claude, ChatGPT, and Cursor. Two weeks earlier they introduced outcome-based pricing: $0 per client where the platform doesn't reach 95% Zero-Touch Transactions. Read them as a single thesis — the value of an AI-native ledger isn't platform fees, it's the data the platform produces. They're subsidizing access and exposing the data wherever the practitioner works.
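Digits' outcome-based model reduces to a simple billing rule: the fee applies only where the automation threshold is met. A sketch of that logic as I read the announcement — the 95% Zero-Touch threshold is theirs; the fee amount and function names are hypothetical:

```python
ZTT_THRESHOLD = 0.95  # from the announcement: 95% Zero-Touch Transactions

def monthly_fee(zero_touch_rate: float, base_fee: float = 100.0) -> float:
    """Outcome-based billing: $0 for any client below the automation threshold.
    The base_fee figure is a hypothetical placeholder."""
    return base_fee if zero_touch_rate >= ZTT_THRESHOLD else 0.0

# A book of clients bills only on the ones the platform actually automates:
clients = {"acme": 0.97, "globex": 0.88, "initech": 0.95}
total = sum(monthly_fee(rate) for rate in clients.values())
print(total)  # 200.0 -> globex, below threshold, generates no fee
```

The design choice worth noticing is that the vendor, not the customer, carries the risk of the automation underperforming — which is exactly the inversion of per-seat pricing.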
Walled gardens are a recipe for losing relevance. Open data through MCP and CLI is how a platform earns its place in your stack. Watch which side your vendors pick.
Quick hits
Sage + PwC + Sage HCM. PwC's agentic delivery model is now embedded into Sage Intacct deployments — the first mid-market accounting partnership for the consulting firm that shipped PwC One last week. Sage HCM also launched, integrating HR, payroll, and finance with an HCM Agent and a Construction edition. Sage Future runs April 28–30; expect more during the conference.
Productivity winners are most worried. Anthropic's follow-up to its 81K-user Claude survey found that workers getting the biggest productivity lift are also the most worried about losing their jobs to it, at three times the rate of workers whose jobs use Claude least. Early-career staff voiced the loudest fears. If your senior staff use Claude every day and you've never had the conversation, have it next week.
Thomson Reuters 2026 AI in Professional Services report. 62% of professionals use generative AI daily; 34% of tax firms deploy at organizational level (up from 21% year over year); only 19% measure ROI. The gap between individual use and firm-level deployment is the bottleneck. Friday's piece went deeper on what the report actually says about what your clients want from you — and what staying silent about your AI work is costing you.
Google's supporting cast. Google split its TPU into training (TPU 8t) and inference (TPU 8i) chips — formal acknowledgment that inference demand has overtaken training. AI Overviews shipped in Gmail for Workspace users. Sundar Pichai confirmed 75% of Google's new code is AI-generated, up from 50% in February. That two-month jump is the canonical example of "the gap closes in waves, not gradually."
Stop assuming permanence
Three major stories, three different directions, one through-line. Partner equity, vendor relationships, platform pricing — none of these are stable assumptions right now. The firms that come through this aren't the ones picking the right horse, the right vendor, or the right platform. They're the ones taking responsibility for their own direction and staying ready to pivot the moment the ground shifts.
That's not a comforting message. It's the right one.
And Saturday's Part 3 of the Agent Builder series is now up — the single highest-return artifact you can build in your AI stack right now, and the one that lets every agent you build for one client carry over to the rest of your book just by swapping a file.
If you want a structured way to build a practice designed to adapt rather than absorb shocks, that's what AI Practice Transformation is built for. Take a look.