Growth series: How SaaS companies are building toward GTM agents
A five-stage framework for mapping your GTM motion
ISSUE #4

Once a month, I write a long-form article covering growth strategies
Thinking in systems
I often approach problems with a "systems thinking" lens – break a scenario into its states, transitions, and dependencies, then figure out which interconnection is broken. When something isn't working, the answer is usually in the structure: a missing dependency, an undocumented rule, or an objective that isn't aligned.
I've found this especially useful for approaching go-to-market workstreams – where performance is determined by people, processes, and systems governed by layers of business logic.
If you've ever tried to establish, update, or simply understand one of your company's GTM motions (say, how leads convert to opportunities), you've faced the common challenge: where do you even start? Is it the messaging, the landing page, or alignment on who's a lead? Or is it downstream – does the enrichment, routing, or sales process have holes?
To be clear, my analytic lens is not an attempt to "systematize" or "automate" everything. Instead, it's a method for understanding the problem first.
What frustrates me is that the eagerness to automate (or now, "just use AI") often skips this step entirely: understanding the structure before trying to optimize it. That skipped step is what bottlenecks the path toward GTM agents.
The progression toward agentic GTM
During my early days at Statsig, I helped establish and refine the foundational GTM motion.
I've seen GTM plays like handling inbound leads go from bespoke manual processes to being enabled by data workflows and enhanced by LLMs. Today, as I look across the industry, I'm seeing more and more examples of workstreams orchestrated by AI agents. This opens the possibility for a not-so-distant future state: agent-managed systems.
The missing layer in GTM automation
What I observed across this progression is that the biggest blocker has always been the dependencies that enable the system to work – not the sophistication of the system itself. Three in particular:
Data readiness (is your data structured in a way that a workflow or agent can act on?)
Codified business logic (have you written down the rules for how leads progress, or does that live in someone's head?)
Defined optimization function (does the team agree on what they or the system should optimize for – speed vs. quality, capacity vs. revenue?)
Without those three dependencies in place, it's difficult to progress to more mature stages of automation.
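To make the three dependencies concrete, here's a minimal sketch in Python. All field names, thresholds, and weights are hypothetical, chosen just to show each dependency in code form:

```python
from dataclasses import dataclass

# "Data readiness": lead attributes exist as structured, machine-readable
# fields rather than scattered across inboxes and spreadsheets.
# (Field names here are illustrative.)
@dataclass
class Lead:
    email: str
    company_size: int
    title: str
    source: str

# "Codified business logic": the qualification rule written down,
# not living in a rep's head. (Threshold is a made-up example.)
def is_qualified(lead: Lead) -> bool:
    return lead.company_size >= 50 and "@" in lead.email

# "Defined optimization function": an explicit, agreed-upon score --
# here a simple fit-plus-intent weighting the team could debate and tune.
def priority_score(lead: Lead) -> float:
    fit = 1.0 if is_qualified(lead) else 0.0
    intent = 0.5 if lead.source == "demo_request" else 0.1
    return 0.7 * fit + 0.3 * intent
```

The point isn't the specific rules. It's that once each dependency exists in this explicit form, a workflow or agent can act on it.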
Here's a framework to help map where your workstreams sit today, and what you might need to address to move them to the next stage.
The five stages of maturity
The diagram below maps the full progression – from manual processes to agent systems where the model operates autonomously.

Defining the orchestration layer
Before we walk through each stage, there are three modes worth defining. The entire framework hinges on one question: who controls the progression of work?
Manual: A person controls the progression. The user (a rep, GTM lead, etc.) decides what step is next, what tools to use, and when the task is done. Even if you're using ChatGPT to draft an email, this is still manual work. There is no orchestration layer.
Workflow: The system dictates the progression. Workflows are systems where actions and tools are orchestrated through predefined code paths – a fixed sequence of steps that achieve a defined output. An LLM might be embedded as a step, but the LLM is just a function call inside the larger structure.
Agent: The model determines the progression. Agents are systems where LLMs dynamically direct their own processes and tool usage. The model observes the state, decides the next action, selects tools, evaluates output, and chooses whether to loop again or terminate.
Note: applications that integrate LLMs but don't use them to control workflows (simple chatbots or API requests to LLMs) are not considered agents.
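The control-flow difference between the last two modes can be sketched in a few lines. This is a toy illustration, not a real agent framework: `model_decide` stands in for an LLM call, and the tools are placeholders.

```python
# Workflow: the system dictates progression -- a fixed, predefined
# sequence of steps. An LLM could be one of these steps, but it never
# chooses what happens next.
def run_workflow(lead, steps):
    for step in steps:
        lead = step(lead)
    return lead

# Agent: the model determines progression -- it observes state, selects
# the next tool, and decides when to terminate. `model_decide` is a
# stand-in for an LLM call that returns a tool name or "done".
def run_agent(lead, tools, model_decide, max_turns=10):
    for _ in range(max_turns):
        action = model_decide(lead)
        if action == "done":
            break
        lead = tools[action](lead)  # model-selected tool call
    return lead
```

Same building blocks, different owner of the loop: in the workflow the sequence is frozen in code; in the agent it's decided at runtime.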
Moving across the stages
These stages are not mutually exclusive. You don't "graduate" from one and fully exit the others. A workstream might have sub-components operating within multiple stages at once.
Let's break down the stages and follow a single example (handling inbound leads) to contextualize the concepts.
Stage 1: Manual processes
This is where most GTM motions start – bespoke and fast-moving. The goal at this point is execution: test the approach, refine the steps, and figure out what works. But manual processes don't scale until the process itself is written down and the logic is externalized from the person doing the work.
Let's walk through the inbound lead example. A lead comes in – a form fill, an email, a LinkedIn DM. You (or an SDR) manually run the whole loop: look up the person on LinkedIn, check where they work, judge whether they fit the ICP, decide priority, draft outreach, and log notes (maybe). You might use ChatGPT to help write the email, but you're still driving the entire sequence. You decide what step comes next. You decide when to stop.

In this stage, the user controls the work. Data lives wherever you work – LinkedIn, your CRM, or a spreadsheet. Business logic is determined by human judgment, and it's often inconsistent until someone takes the time to document it. The optimization function is implicit: you're optimizing based on your own experience and intuition, which works when volume is low but breaks when you need anyone else to replicate the process.
Stage 2: Workflow automations
To scale a manual process, you standardize the inputs. You take the steps that were living in someone's head, codify them into a sequence, and build orchestrations to achieve the same outcome every time. This is the act of "automating" in its classical sense.
To follow the same inbound lead example: a lead arrives and the workflow triggers automatically. An enrichment step pulls firmographic data and title information through a provider like Clay. A grading step applies rules-based scoring. A routing step assigns the lead to the right owner based on territory, round-robin logic, or named account mapping – all configured within Salesforce or whatever your system of record is. Same lead attributes in, same routing and scoring out, every time.

In this stage, the system controls the work. The business logic is no longer in someone's head – it's embedded in the workflow itself. Each node represents a step, each step calls a specific action or tool, and the sequence is fixed. Data lives in your CRM and enrichment providers, structured and accessible. The optimization function is baked into the scoring and routing rules – the team's definition of "good" is now codified as lead stages and the handoff to sales.
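A routing step like the one described above is just codified rules. Here's a minimal sketch – the territory map, rep names, and precedence order (named account, then territory, then round-robin) are all invented for illustration:

```python
from itertools import cycle

# Hypothetical territory assignments and round-robin pool.
TERRITORIES = {"EMEA": "dana", "AMER": "sam"}
_round_robin = cycle(["alex", "jordan"])

def route(lead: dict) -> str:
    # Named account mapping takes precedence.
    owner = lead.get("named_account_owner")
    if owner:
        return owner
    # Then territory rules.
    region = lead.get("region")
    if region in TERRITORIES:
        return TERRITORIES[region]
    # Fallback: round-robin across available reps.
    return next(_round_robin)
```

Same lead attributes in, same owner out, every time – which is exactly what makes this stage deterministic and auditable.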
Stage 3: LLM workflows
The shift from Stage 2 to Stage 3 is more subtle. You still have a workflow, but now you insert an LLM as a step node, which introduces variability in the output. The same input can produce a different result each time, because the LLM is nondeterministic (outputs are based on probability and may vary). The benefit is that you expand the input layer, because LLMs are great at handling unstructured data (text, transcripts, session logs) that rules-based automation struggles with.
For the inbound lead motion, the steps look like Stage 2 plus LLM nodes. Using tools like AirOps or Clay for LLM orchestration, you can enrich leads with contextual data that doesn't exist in a structured field – web intent data summarized into a narrative of interest (we did this at Statsig), or Gong transcripts from prior conversations on the account. The workflow is still fixed – enrich, summarize, score, route – but the LLM steps give the SDR context they wouldn't have had otherwise.

In this stage, the system still controls the work. The LLM doesn't choose what step comes next, doesn't decide to loop, and doesn't select its own tools. But this is where data readiness becomes more important. While LLMs are great at unstructured text, the data still needs a format to feed into – like serialized JSON objects. As you layer in more data points, you need to identify the right tables and join them together upstream.
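To show what "an LLM as a step node" and "serialized JSON" mean in practice, here's a sketch of one such node. The LLM call is stubbed out – in a real workflow it would be an API request to a provider – and the prompt and field names are assumptions:

```python
import json

def summarize_with_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a canned string here.
    return "Placeholder summary of account interest"

def llm_summary_step(lead: dict, transcripts: list[str]) -> dict:
    # Data readiness in action: serialize the unstructured context into
    # a structured payload the LLM step can consume.
    payload = json.dumps({"lead": lead, "transcripts": transcripts})
    lead["context_summary"] = summarize_with_llm(
        "Summarize this account's interest:\n" + payload
    )
    return lead  # the workflow continues to its next fixed step
```

Note what the LLM does not do: pick the next step, loop, or choose tools. It's a function call inside a structure the system still controls.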
Stage 4: Agent-enabled processes
Here, we introduce a paradigm shift. In Stages 2 and 3, the system governed the work – fixed paths, predefined steps. Now, the agent controls the loop. It can reason about what to do next, choose which tools to call, and decide whether to loop again or stop. The user still triggers the process and can intervene mid-loop to guide, correct, or approve – but the agent is the operator.
Following the same inbound lead example: say you want to add product usage data as context for recent leads. Instead of manually writing a SQL query against your product tables, pulling the lead list, cross-referencing enrichment status, and stitching it all together – you prompt an agent through an app like Codex or Claude.
The agent then writes the query, pulls the data, kicks off the enrichment workflow if leads haven't been processed yet, joins the results, and helps draft outreach. A task that would have taken you an hour of manual work across three or four tools now runs inside a single agent thread. (For brevity: I won't get into the full anatomy of agents like skills, MCPs, tool auth, and loop design.)

In this stage, the agent controls the work, but the user controls the scope. Data lives across your CRM, warehouse, and tools – the agent accesses it through tool calls rather than fixed integrations. And this is where codified business logic becomes critical. The agent needs to understand how to move records across states and what to ignore.
In the lead example, you still only want to process qualified leads – the whole point of kicking off enrichment is to move unprocessed leads into a qualified state. If that logic isn't codified, you're providing it as context in the prompt every single time. If it is, you just tell the agent to run the flow and only grab the leads you want to work.
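"Codified" here can be as simple as a state-transition table the agent calls as a tool instead of being re-told the rules in every prompt. The stage names and transitions below are illustrative:

```python
# Hypothetical lead-stage rules, written down once instead of restated
# in every agent prompt.
VALID_TRANSITIONS = {
    "new": {"enriched"},
    "enriched": {"qualified", "disqualified"},
    "qualified": {"working"},
}

def can_transition(current: str, target: str) -> bool:
    return target in VALID_TRANSITIONS.get(current, set())

def leads_to_work(leads: list[dict]) -> list[dict]:
    # The agent only picks up leads eligible to become qualified --
    # the rule lives here, not in the prompt.
    return [l for l in leads if can_transition(l["stage"], "qualified")]
```

With this in place, "run the flow and only grab the leads you want to work" becomes a one-line instruction, because the definition of "want to work" is already in the system.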
Stage 5: Agentic systems
At last, we arrive at the future state. The agent (or an orchestration of agents) governs the entire workstream. We're still in the early innings of this, but the designs are starting to emerge.
For the inbound lead motion, the most documented example I've seen is Vercel's inbound lead agent. After shadowing the highest-performing reps, Vercel's GTM engineers designed the entire workstream so an agent could manage it. The agent decides whether a lead is worth qualifying, determines what to say, pulls from internal data, and drafts a response. A rep still reviews and hits send.

In every prior stage, a user was close enough to the process to course-correct if priorities shifted. Here, the agent is making decisions autonomously inside a revenue system – and without clear internal alignment on what to optimize for, that gets risky fast.
In Vercel's case, the optimization function was explicit: inbound handling is a delegation layer, not a growth lever. The higher-impact work is outbound and strategic accounts. So the inbound agent is reallocating human attention toward that objective – not just routing leads faster.
Without that clarity, you risk building autonomous systems that optimize for the wrong thing. This is true at every stage, of course – but the cost is exacerbated here because the investment to build and the consequence of misalignment are both significantly higher. (If you're working toward this stage or experimenting with it, I'd love to hear about it!)
What's next?
Hopefully, this framing is helpful to map where you are – what elements of your workstreams are in what stage, and what the underlying structure and dependencies look like.
In the next write-up, I'll get more tactical on building GTM agents.
That’s all for now.
Cheers,
Ian at SaaS Weekly
Thank you for reading this Friday's SaaS Weekly Roundup! Let us know what you thought about this week's articles by replying to this email.