The judgment gap: Why accountants are overestimating their moat

The profession's security blanket is thinner than anyone wants to admit

This week I've been showing you what working with AI actually looks like in practice — adversarial dialogue that catches material errors, reflective processes that build institutional knowledge, the skill of telling AI it's wrong and capturing why. All of that depends on something the profession takes for granted: professional judgment.

I'm going to make a claim that most accountants won't like. If you spend any time on LinkedIn, you already know the objection — it shows up in every AI thread, every comment section where someone suggests that CAS work is changing. Always some version of: "AI can't replace professional judgment."

It's the profession's security blanket. And I think it's thinner than anyone wants to admit.

The line everyone's drawing

Sequoia Capital published a framework this month that accountants should take seriously. In "Services: The New Software," they draw a clean distinction between intelligence work and judgment work. Intelligence follows rules — even complex rules. Judgment requires experience, taste, and instinct built over years. Their thesis: the next trillion-dollar company will automate the intelligence layer and leave the judgment to humans.

That gives accountants exactly what they want to hear. Compliance is intelligence. Advisory is judgment. The routine gets automated, the humans keep the hard stuff. Clean. Comfortable. And dangerously imprecise.

Nate Jones — who writes about AI and organizational structure for technology companies — recently made a point that cuts through this comfort. His observation: what appears to be a single judgment-heavy role is almost always a composite of verifiable preparation work and a thin layer of genuine judgment on top. We bundled them together because one person doing both was organizationally efficient. The rules became invisible — and we started calling the result "judgment."

He's writing about software developers and product managers. But the insight lands directly in your practice.

Most of what you call judgment is rules you forgot you learned

Walk through your actual work. Transaction categorization on an ambiguous expense — judgment? Or a decision tree built from years of applying the same tax code to similar situations? Entity structure for a new client — judgment? Or a matrix of liability, tax, and operational factors that follows a learnable pattern? Materiality thresholds, revenue recognition, deductibility decisions — each one follows a framework you've internalized so deeply it feels like instinct.

Sequoia's own assessment: tax advisory is 80 to 90 percent intelligence work. Not because it's easy — because it's rule-based. The rules are complex, the combinations are vast, and the practitioner who's run the decision tree 500 times no longer sees the tree. They just see the answer. That's not judgment. That's pattern recognition.

The market is already pricing this in. Harvard Business Review published labor market data this month showing that job postings for routine, automation-prone roles have dropped 13% since ChatGPT's release — while demand for analytical and creative roles climbed 20%. Employers are paying for judgment and cutting intelligence. The shift isn't theoretical. It's in the hiring data.

Here's the part nobody talks about. Accountants aren't trained on professional judgment. You're trained on knowledge retention and rule application — CPA exam, tax code memorization, GAAP standards, firm SOPs. The entire professional development pipeline is built around learning rules and applying them to client situations. So of course most of us experience that application as judgment — it's the only version of "expertise" we were taught.

But internalized rules are still rules. And rules are exactly what AI learns.

Where judgment actually lives

The true judgment residual in a CAS practice is real — and it's more valuable than the rest of your work combined. But it's thinner than "professional judgment" implies.

It's the conversation where you tell a business owner their numbers are technically fine but the trend means they're 18 months from a cash crisis — and you calibrate the delivery based on three years of knowing how they handle bad news. It's the instinct that something in a new client's financials is wrong before you can articulate what. It's the decision to fire a client whose books are clean but whose behavior signals risk you can't yet prove. It's the advisory recommendation that costs you revenue short-term because it's right for the client long-term.

That's the wringable neck at its most concentrated. Your name on the file. Your relationship. Your courage to say the hard thing. And for most practitioners, it's 10 to 15 percent of their actual week — because the other 85 to 90 percent is spent on intelligence work they've been calling judgment.

Why this is the best news you'll hear this week

If most of your week is intelligence work disguised as judgment, that means most of your week is automatable — which means you're about to get it back.

The practitioners who understand this will encode their rules through context engineering, automate the intelligence layer, and redirect their time toward the work that was always the most valuable. The advisory conversation you never had time to prepare for properly. The client relationship you've been maintaining on autopilot because you were buried in workpapers. You get to be better at the thing that actually matters — because you finally have time to do it.

The firms that keep defending "professional judgment" as a blanket term will discover the distinction when a client asks why they're paying for intelligence work that an AI just replicated at a fraction of the cost.

Here's your homework. Pull up your calendar from last week — the real one, not the job description version. Go hour by hour and ask one question: was this genuine judgment — a decision under uncertainty where my experience was the only thing that got me there? Or was it intelligence — applying rules I've learned to a situation I've seen before?

Don't count the bank rec. Don't count the entity structure decision you've made 200 times. Count the moments where you genuinely didn't know the answer and your accumulated wisdom was the only thing that produced one.

That's your moat. Everything else is intelligence — and intelligence is exactly what AI was built for.

I built a structured version of this audit — a CAS-specific prompt that walks you through your entire week and forces the intelligence/judgment classification on every activity. It's available free below for subscribers.
