A reader asked me a sharp question after last week's article: what's the most common mistake CAS practitioners make when they build their first agent? It's not a technical mistake. It's a selection mistake: where you place your first AI agent determines success or failure. Practitioners pick the most painful task in their practice — the three-hour monthly reconciliation, the transaction coding backlog, the bank feed exception queue — and they point the agent at it. That task is almost always the wrong first target, because the most painful tasks in CAS work are usually the ones that live deepest inside your accounting platform.
Pain isn't the signal you think it is. Describability and independence are. Your first agent should be your easiest win, not your biggest problem.
This is Part 2 of a five-part series on building AI agents for your CAS practice. Part 1 covered what an agent actually is. This piece covers where to place your first one.
You can build agents inside your platform — but don't start there
Let me be straight about something the hype machine gets wrong, because I don't want to sell you a convenient lie. It is not true that AI agents can't work inside Xero, QuickBooks, or whatever platform you're running. They can. Browser automation reaches into the user interface. Limited API access opens some doors. Computer-use models — the kind that can click, type, and scroll on your behalf — are advancing quickly. Working systems exist, and they're getting better.
But those systems take experience to build. Every vendor update breaks them. Every edge case is yours to solve. Every authentication flow, session timeout, and unexpected popup is a new thing you have to teach the agent to handle. You're not building one agent — you're building one agent and a small maintenance practice to keep it running.
That's a reasonable place to end up. It's a terrible place to start. A first agent that takes six weeks and half-works will convince you — and your team — that AI isn't ready yet. A first agent that works on the first Tuesday afternoon you try it will change your relationship with the technology permanently. You are picking a proving ground, not a production system. Pick the one you can finish.
Three zones, in order of difficulty
Agent opportunities in a CAS practice fall into three zones. You work through them in order of difficulty, not value.
Zone 1 — work that lives entirely outside the platform. Engagement letters, client follow-up emails, onboarding checklists, meeting prep notes, advisory one-pagers, internal SOPs, workpaper commentary drafts, and training documentation. None of it touches your accounting platform. Text in, text out. This is where last week's engagement letter example sat, and it's where your first five agents should sit too.
Zone 2 — data-out workflows. You export a report from your platform — trial balance, AR aging, bank rec, P&L with comparatives — and hand it to AI for analysis or transformation. The agent never touches the platform directly. It only sees the file you've already pulled. Variance commentary, close narratives, ratio analysis, board packet summaries, and exception flagging all live here. Data moves one way: out of the platform, through the agent, back to you.
Zone 3 — inside the platform. Browser use, computer use, direct API calls. This is where you eventually want to be for tasks like bulk transaction categorization, bank feed reconciliation, and automated journal posting. But Zone 3 is where you go once you know what you're doing. It isn't where you learn.
Most practitioners skip Zones 1 and 2 entirely and start in Zone 3. Then they wonder why AI "doesn't work for accounting."
Why accountants pick the wrong first target
Accountants are trained to attack the biggest risk first. In audit, you start with the material items. In tax, you focus on the high-exposure positions. In monthly close, you investigate the largest variances. The entire profession runs on "work the problem that matters most."
That instinct is useful in client work. It's counterproductive in tool adoption. When you're learning a new way of doing the work, the biggest problem in your practice isn't your starting point — it's your proving ground. And you can't prove anything on a project that takes two months to fail.
Momentum compounds. Frustration kills. The practices that succeed with AI are the ones that stack wins — five Zone 1 agents running before they touch Zone 2, and five Zone 2 agents running before they attempt Zone 3. The wins teach the skill. The skill is what eventually solves the hard problems.
Try this now — 15 minutes, one client
Pick one of your regular clients. Open a blank document and list every step you or your team completes for that client on a recurring engagement — a monthly close, a quarterly review, an annual tax return, or any other task you do for them on a cycle. Start with the first input that arrives and end with the finished deliverable hitting the client's inbox.
Now tag each step Z1, Z2, or Z3. Z1 for work that happens entirely outside the platform. Z2 for work that uses exported platform data but runs outside. Z3 for work that requires reaching into the platform itself.
Count your Z1 items. Those are your first five agents. Count your Z2 items. Those are your next five. Ignore Z3 for now — you'll come back to it in two months when the first ten are running.
If you want a prompt to run the audit, paste your workflow list into any AI tool and ask: "Classify each of these steps as Z1 (fully outside my primary platform), Z2 (uses data exported from the platform), or Z3 (requires direct interaction with the platform). For every Z1 and Z2 item, suggest what the finished agent output would look like."
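If you'd rather run the tally part of the audit locally, the counting step can be sketched in a few lines of Python. Everything here is illustrative: the sample steps and their tags are placeholders for your own workflow list, which you'd tag by hand (or with the prompt above) before counting.

```python
# Hypothetical sketch: count Z1/Z2/Z3 tags on a recurring-engagement
# workflow. The steps below are made-up placeholders; replace them
# with your own list, tagged the way the exercise describes:
# Z1 = fully outside the platform, Z2 = uses exported platform data,
# Z3 = requires reaching into the platform itself.
workflow = [
    ("Draft engagement letter", "Z1"),
    ("Send client follow-up email", "Z1"),
    ("Export trial balance to Excel", "Z2"),
    ("Write variance commentary from the export", "Z2"),
    ("Post adjusting journal entries", "Z3"),
]

def count_zones(steps):
    """Tally how many steps fall in each zone."""
    counts = {"Z1": 0, "Z2": 0, "Z3": 0}
    for _, zone in steps:
        counts[zone] += 1
    return counts

counts = count_zones(workflow)
# Z1 items are candidates for your first agents; Z2 for the next batch.
print(counts)
```

The point of the tally isn't the arithmetic; it's forcing every step into exactly one zone, which is what surfaces the Z1 items you'd otherwise overlook.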
Start where the door is already open
Your first agent isn't the one that solves your biggest problem. It's the one that proves the approach works in your practice. Three wins in, the shape of what's possible looks different — and the hard problems start looking more solvable.
The walled gardens aren't going anywhere. They'll be there when you're ready to work inside them. Start where the door is already open.
Next up in the series: the client context file. One document that makes every agent you build for this client better — and that lets you take the same agent to any other client just by swapping files.
If you're ready to move from proof-of-concept to a running practice of AI agents, AI Essentials is the guided platform that takes you from instruction to working agents to systems of agents. Get started at theaiaccountant.ai/essentials — the platform, curated workflows for your firm size, guided onboarding, and a monthly live implementation call where we work through what's working in your practice and what's next.

