What work should move to an AI agent first?
A five-part filter for choosing the first AI agent workflow: frequency, value, clarity, risk, and learning.
The first AI agent workflow should not be the flashiest one. It should be the one that repeats often enough, costs enough, and can be checked by a human without creating more work than it saves.
This is the filter we use before recommending a managed agent.
Frequency
If the task happens once a quarter, it is usually not the right first agent. The setup cost can easily outweigh the benefit.
Look for weekly loops:
- meeting prep
- follow-up drafting
- proposal assembly
- opportunity research
- client status updates
- invoice and receivable follow-through
- knowledge-base updates after delivery
Frequency matters because agents improve through repetition. One-off work does not create enough correction data.
Value
A task can be frequent and still not worth automating. The question is what the mess costs.
A good first workflow has one of these costs:
- senior time gets burned on coordination
- leads go cold because follow-up slips
- client trust erodes because updates are late
- delivery quality varies because context is scattered
- cash gets delayed because finance admin is fuzzy
If the cost is visible, the agent has a job. If nobody can name the cost, pause.
Clarity
Can the firm describe good output?
For example, "draft follow-up after an intro call" is still too vague. Better:
- summarize what the prospect said
- identify the next promised action
- draft a short email in the partner’s voice
- cite the source note used
- mark anything uncertain
- wait for human approval before sending
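For teams that want to pin this down, that checklist can be written as a literal job description the agent is configured against. A minimal sketch, not a fixed schema; every field name here is an assumption, not a real agent-framework API:

```python
# Hypothetical sketch: the clarity checklist as a machine-readable
# job description. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class WorkflowSpec:
    name: str
    required_outputs: list[str]
    requires_human_approval: bool = True

followup_after_intro_call = WorkflowSpec(
    name="draft follow-up after an intro call",
    required_outputs=[
        "summary of what the prospect said",
        "the next promised action",
        "short email draft in the partner's voice",
        "citation of the source note used",
        "explicit list of anything uncertain",
    ],
)
```

If the firm cannot fill in required_outputs, the workflow fails the clarity test before any model is involved.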
Clarity is what turns AI from a chat box into an operating role.
Risk
The first agent should not be allowed to create reputational damage on its own.
The safest early pattern is draft, review, approve. The agent can prepare the work. The human decides what leaves the building.
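As a shape, the gate is a few lines of control flow. A minimal sketch, assuming a hypothetical agent object with a draft() method; the terminal prompt stands in for whatever review channel the firm actually uses:

```python
# Draft, review, approve: the agent prepares the work, a human decides
# what leaves the building. `agent` and its draft() method are hypothetical.
def run_with_approval(agent, task: str) -> str | None:
    draft = agent.draft(task)      # agent prepares the work
    answer = input(f"Draft:\n{draft}\n\nApprove and send? [y/N] ")
    if answer.strip().lower() == "y":
        return draft               # approved: release it
    return None                    # rejected: nothing goes out
```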
Risk goes up when the task touches:
- client advice
- legal or financial interpretation
- hiring or people decisions
- sensitive partner relationships
- public claims under the firm’s name
Those workflows can still use agents, but not as the first unmanaged experiment.
Learning
A workflow is a better candidate when every approval, edit, rejection, and result teaches the system something.
Useful learning signals:
- the partner approved the draft unchanged
- the partner edited tone but not substance
- the partner rejected the suggestion because timing was wrong
- the client replied positively
- the opportunity moved forward
- the task was irrelevant because the source data was stale
That data becomes the improvement loop. Without it, the agent stays generic.
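One way to make that loop concrete is to store each review outcome as a structured event. A sketch under our own naming, not a standard format:

```python
# Hypothetical record for the correction data described above. Each
# approval, edit, or rejection becomes one event the system can learn from.
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    APPROVED_UNCHANGED = "approved_unchanged"
    EDITED_TONE_ONLY = "edited_tone_only"
    EDITED_SUBSTANCE = "edited_substance"
    REJECTED = "rejected"

@dataclass
class FeedbackEvent:
    workflow: str
    outcome: Outcome
    reason: str = ""  # e.g. "timing was wrong", "source data was stale"

event = FeedbackEvent("follow-up drafting", Outcome.REJECTED, "timing was wrong")
```

The point is not the schema. It is that every outcome lands somewhere queryable instead of evaporating in a chat history.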
A simple scoring pass
Give each workflow a score from 1 to 5 for frequency, value, clarity, risk control, and learning. Do not overthink the math. The conversation is the point.
A good first candidate usually looks like this:
- frequency: 4 or 5
- value: 3 to 5
- clarity: 4 or 5
- risk control: 4 or 5
- learning: 3 to 5
If clarity is below 3, write a better job description. If risk control is below 3, add an approval gate or choose another workflow.
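Scoring by hand on a whiteboard is fine. For teams that prefer to run the list through something, here is a minimal sketch of the same pass with the thresholds above hard-coded; the function name and return strings are ours:

```python
# The scoring pass above, hard-coded. Scores run from 1 (weak) to 5 (strong).
def score_candidate(frequency, value, clarity, risk_control, learning):
    scores = [frequency, value, clarity, risk_control, learning]
    assert all(1 <= s <= 5 for s in scores), "each score runs from 1 to 5"
    if clarity < 3:
        return "write a better job description first"
    if risk_control < 3:
        return "add an approval gate or choose another workflow"
    if (frequency >= 4 and value >= 3 and clarity >= 4
            and risk_control >= 4 and learning >= 3):
        return "good first candidate"
    return "workable, but keep looking"

# Example: weekly follow-up drafting with an approval gate in place
print(score_candidate(frequency=5, value=4, clarity=4, risk_control=5, learning=4))
# -> good first candidate
```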
Where this points
For many consulting firms, the first agent ends up in business development or meeting follow-through. That is why our first public wedge is Bob, the business opportunity builder. It sits close to revenue, but it does not need to replace the human relationship.
If you want to test your own workflow list, start with the AI Jungle Assessment. Bring the messy work. The useful answer is usually hiding there.