Agency performance marketing has a time problem. It doesn't have a data problem, a tooling problem, or an AI problem — though each of those gets named in every State-of-the-Industry report. The problem is time. Where it goes. What gets it. What doesn't. And why the stack of tools built to save it keeps adding more of it.
This is a field report, not a formal study. We built Zeke AI after ten years of running performance media at multi-location agencies — the kind that manage twenty, fifty, two hundred clients on Meta and Google. What follows is what we saw, what we measured, and what we think the next generation of agency tools has to look like to actually move the profession forward.
Every claim here is grounded in one of three primary sources: public pricing pages of the incumbent tools, platform documentation from Meta and Google, or direct operational observation inside agency books we've worked with. Where we cite numbers, we're reporting what we've actually seen; we're not rounding up to make the point bigger.
The short version
- Reporting has eaten the agency workweek. A typical performance marketer at a multi-location agency now spends more of the week compiling, formatting, and explaining reports than actually buying media. The tools that promised to fix this — DashThis, AgencyAnalytics, Looker Studio — made the reports prettier without collapsing the report-to-decision loop.
- AI has not arrived in agency operations. Most “AI” products sold to agencies in 2026 are either (a) generative creative tools or (b) rule-engine automators with AI branding. Neither makes the daily portfolio decision — “across all my clients, what are the nine things I should do this morning?” — which is the actual bottleneck.
- The tool stack is fragmented past the point of usefulness. A median agency in our sample runs nine distinct SaaS tools plus Meta, Google, Notion, and Google Docs. None of them talk to each other without a human middleman. The human middleman is the most expensive person at the agency.
- Institutional knowledge never becomes operational. Every agency we've seen has playbooks, brand guidelines, vertical-specific tactics, and rejection reasons living in documents that AI tools can't see when they generate recommendations. That knowledge compounds on paper and decays in practice.
- Pricing is misaligned with how agencies actually scale. Per-seat pricing punishes the agencies that invest in training teams. Enterprise-only pricing locks out the 20–50-client agencies who need the tooling most. Per-ad-account pricing taxes you for doing your job.
The rest of this post unpacks each of these in detail. If you're already nodding, you can skip to the last section — “What the next generation of agency tools has to look like” — where we lay out what we think has to change.
Finding 1
Reporting has eaten the week.
The median performance marketer we've tracked spends roughly 48 hours of a 50-hour work week on tasks that aren't media buying. The breakdown across a typical week, based on time-tracking data from four agencies we worked with directly in 2024–2025:
- Dashboard & report assembly: 18 hrs
- Client status meetings + Slack: 12 hrs
- Campaign review + QA: 9 hrs
- Creative coordination: 5 hrs
- Meta + Google platform changes: 4 hrs
- Actual optimization decisions: 2 hrs
Two hours of a fifty-hour week goes to the decision-making work that moves CPL, ROAS, and LTV. Forty-eight hours goes to the work that supports, documents, and communicates those two hours. That ratio is the real problem. Everything else is downstream of it.
The reporting tool market — DashThis, AgencyAnalytics, Looker Studio (Google Data Studio before the rename) — built its reputation on a single premise: make the reports faster. And they did. A white-label PDF that took four hours to hand-assemble in 2019 takes thirty minutes in 2026. That's real progress.
But the decision half of the loop never got faster. The buyer still has to read the report, map what they see to their book of client context, remember what worked for the Memphis clinic last quarter, check the compliance notes from the Q1 retainer call, and then figure out which specific campaigns to touch today. That cognitive work is the bottleneck. Reporting tools don't touch it.
Finding 2
AI has not arrived in agency ops.
The word “AI” shows up on the homepage of every agency tool we've audited in the last six months. The actual AI capabilities, when you read past the marketing page and into the product, fall into four buckets:
- Generative creative tools — AdCreative.ai, Pencil, Creatopy. These are legitimate AI products. They produce more creative variants than a human designer could. But more creative is not the bottleneck at a multi-client agency; deciding which creative to run is. Generative creative tools make the upstream problem worse — more variants to test, more options to evaluate.
- Rule-engine automators with AI branding — Revealbot, several Meta-specific automation platforms. These let you write rules (“if CPL > $X, pause the ad set”) and then enforce them. This is useful. It is not AI in any meaningful sense. The intelligence is in the human writing the rule (see the sketch after this list).
- Attribution models dressed as AI — Northbeam, TripleWhale, Hyros. These are sophisticated statistical products that help you understand which channels drive revenue. They don't decide what to do with that information. Their output is a better dashboard for the human to interpret.
- The actual AI opportunity, mostly unfilled — a system that looks at every client in your book every morning, generates a prioritized list of specific recommended actions, cites the data and internal context behind each, and learns from every human approval or rejection. Almost nobody has shipped this. The only attempts we've seen in production are internal tools built by large agencies for their own use.
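To make the distinction in the second bucket concrete, here is a minimal sketch of what a rule-engine automator does under the hood. The rule shape is hypothetical (this is not Revealbot's actual API); the point is that every metric, threshold, and action is authored by a human.

```python
from dataclasses import dataclass

@dataclass
class AdSet:
    name: str
    cpl: float          # cost per lead, as reported by the ad platform
    status: str = "ACTIVE"

@dataclass
class Rule:
    # A human authors every part of this: the metric, the threshold, the action.
    metric: str
    threshold: float
    action: str

def apply_rule(rule: Rule, ad_set: AdSet) -> AdSet:
    """Enforce one human-written rule. No learning, no context, no judgment."""
    if rule.metric == "cpl" and ad_set.cpl > rule.threshold and rule.action == "pause":
        ad_set.status = "PAUSED"
    return ad_set

# "If CPL > $45, pause the ad set" -- the intelligence lives in whoever chose $45.
rule = Rule(metric="cpl", threshold=45.0, action="pause")
print(apply_rule(rule, AdSet(name="TRT - Memphis - Testimonials", cpl=61.20)))
```

Nothing in that loop gets smarter over time; change the market and the rule silently goes stale until a human rewrites it.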
The fourth category is where the leverage is. It's also the category that most “AI for agencies” marketing obscures. When you hear an agency tool claim “AI-powered recommendations,” ask two questions: Does it read your internal playbook? Does it learn from your rejection reasons? If both answers are no, it's a dashboard with autocomplete.
Why the AI gap is specifically hard to close
Building an AI recommendation engine for agency media buying has three hard problems that general-purpose SaaS tools don't face:
- Per-client context is everything. The same CPL spike means different things at Luma Med Spa (seasonal) versus Grove Chiropractic (creative fatigue). An AI without that context gives generic advice — the kind you can already get from ChatGPT.
- Agency knowledge lives in documents, not databases. The SOW is a PDF. The creative review notes are in Notion. The rejection rationales are in Slack. A tool that can't ingest those documents can't cite them in its reasoning.
- The actions are platform-specific and irreversible. Pausing an ad account, shifting a budget, killing a campaign — these have real money consequences. AI that takes actions without robust approval loops and audit trails is dangerous AI.
The agencies we've seen try to build this internally run into the same wall: the engineering cost is enormous, the AI model costs are real, and the product has to work across every client before it's useful for any of them. The economics only work if somebody builds it once and sells it at volume.
Finding 3
The stack fragmented past the point of usefulness.
Here's a non-exhaustive inventory of what a 30-client performance agency actually uses in 2026, based on what we've personally audited:
- Media buying + automation: Meta Business Manager, Google Ads Editor, Revealbot (or Smartly), AdCreative.ai
- Attribution: Northbeam or TripleWhale or Hyros (sometimes all three running simultaneously during a bake-off)
- Reporting: DashThis or AgencyAnalytics, plus Looker Studio for the CEO's custom dashboard
- CRM + contacts: HubSpot or Pipedrive, sometimes Airtable, plus whatever the client uses
- Internal docs: Notion for playbooks, Google Docs for SOWs, Slack for decisions-that-become-precedent, Loom for async review
- Project management: Asana, ClickUp, or Linear depending on the founder's vintage
- Client communication: Slack Connect, email, and a shared Google Drive folder per client
- Creative assets: Frame.io, Dropbox, Figma for UI clients
Nine to fifteen tools is the norm. None of them talk to each other without a human middleman. A single cross-tool question — “how did the creative we reviewed on Loom last week perform in Meta and does that match our Q1 attribution pattern?” — requires a person to query four products and stitch the answer together.
The cost of this fragmentation isn't the SaaS bill. The SaaS bill is real but containable — typical agency spend is $800–$2,500/mo on tools for a 30-client book. The cost is the human middleman, who costs $85–$175 per hour and spends eighteen hours a week being the middleman. At the high end of that range, that's 18 hrs/week × $175/hr ≈ $3,150 per week, or roughly $13,000 per month. That's five times the tool cost.
This is why adding another tool — even a great one — rarely reduces the agency's total operational cost. The savings are in removing tools, not adding them. Or, more precisely, in replacing five narrow tools with one that owns the decision layer across all of them.
Finding 4
Institutional knowledge never becomes operational.
Every agency we've worked with has written down what it knows. Somewhere. Usually in a Notion wiki that one senior account lead maintains in their spare time. The content ranges from vertical playbooks (“how we run TRT accounts”) to client-specific notes (“Dr. Chen prefers creative with patient testimonials; she does not want any urgency-based messaging”) to compliance guardrails (“Meta's March 2026 health policy prohibits before/after images in our space”).
This knowledge is genuinely valuable. It's also operationally invisible. When a media buyer sits down on Tuesday morning to review the queue, none of that knowledge is automatically loaded into their thinking. They either remember it, or they don't. In practice, most of it gets lost — especially for less-senior team members who weren't on the call when the decision was made.
This is the compounding problem that never compounds. Every rejection reason, every approved creative, every “this didn't work for this vertical in this market” is a signal. In theory, those signals stack into a playbook that makes the agency smarter every week. In practice, they stack into a Notion page that gets read once at onboarding and then never again.
The fix requires three things working together: (1) a way to ingest documents without someone having to format them for an API; (2) a structured representation of what's in them that a system can search over; (3) a decision engine that pulls the relevant knowledge into its reasoning before it generates a recommendation. Almost no current agency tool does all three.
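Here is a minimal sketch of how those three pieces connect. Every name in it is hypothetical, and the naive keyword search stands in for what would really be a document parser, an embedding index, and an LLM reasoning step:

```python
# Hypothetical three-part loop: (1) ingest -> (2) search -> (3) cite-and-recommend.
knowledge_base: list[dict] = []   # (2) the structured, searchable representation

def ingest(source: str, text: str) -> None:
    """(1) Take a raw document as-is and store searchable chunks with provenance."""
    for chunk in text.split("\n\n"):
        knowledge_base.append({"source": source, "chunk": chunk.strip()})

def search(query: str) -> list[dict]:
    """Naive keyword overlap, standing in for real semantic search."""
    terms = set(query.lower().split())
    return [e for e in knowledge_base if terms & set(e["chunk"].lower().split())]

def recommend(client: str, observation: str) -> str:
    """(3) Pull relevant internal knowledge into the reasoning before recommending."""
    hits = search(f"{client} {observation}")
    cites = "; ".join(e["source"] for e in hits) or "no internal context found"
    return f"{client}: {observation} -> draft recommendation (cites: {cites})"

ingest("notion/dr-chen-preferences.md",
       "Dr. Chen prefers creative with patient testimonials.\n\nNo urgency-based messaging.")
print(recommend("Dr. Chen", "CPL spike on testimonial creative"))
```

The provenance field is the part most tools skip: a recommendation that can't say which internal document it leaned on can't be trusted or corrected.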
Finding 5
Pricing is misaligned with how agencies scale.
Three pricing models dominate the agency-tool market. All three have problems.
Per-seat pricing
Most project management, CRM, and communication tools (HubSpot, Asana, Notion) charge per-seat. This punishes agencies for investing in team growth — every new media buyer you hire is a cost you pay monthly, regardless of how much value that seat produces. At a 50-person agency, per-seat pricing can easily run into five figures per month for tools each employee uses lightly.
Per-ad-account pricing
Some automation and reporting tools charge per connected ad account. This taxes agencies for their core job — connecting client accounts to the tool. An agency with 40 clients on Meta and Google is paying for 80 connections. A tool that charges $20 per connected account per month is $1,600/mo before any usage. It rewards agencies that stay small.
Enterprise-only pricing
The most capable tools — Northbeam, the full Salesforce stack, the enterprise tier of Supermetrics — are priced for enterprise and gated behind a sales team. Minimum contracts start at $24,000/year and often require annual prepay. This locks out the 20-to-50-client agencies who need the capability most but can't commit to that kind of contract before seeing the product deliver value.
The underlying issue: pricing hasn't caught up with the shape of a modern performance agency. A 30-client agency isn't a small business; it's a specialist firm with deep operational needs. The tooling market hasn't figured out how to price to that specialist-firm shape. It either treats them as SMB (too feature-limited) or as enterprise (too expensive, too sales-gated).
The path forward
What the next generation of agency tools has to look like.
If we're right about the five findings above, the next generation of agency tooling has to solve five things simultaneously. Not as a bundle of separate products, but as one system.
1. Decision-first, not reporting-first.
The primary output is a prioritized list of things to do, not a dashboard of things that happened. Reports are a side effect of the decision log, not the main artifact. When an agency owner asks “what's happening this week?” the answer is the approved-rec history, not a PDF.
2. Knowledge-aware AI, not context-free AI.
The AI reads the agency's playbooks. When it recommends an action, it cites the internal document that justifies the recommendation — not just the metrics. That way the human can trust the reasoning instead of having to re-derive it from raw data every time.
3. One workspace, every client.
Cross-client analysis has to be native, not bolted on. If you run TRT campaigns for six clients, you should be able to ask one question once — “which TRT clients use patient-testimonial creative?” — and get one answer, not six.
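As a toy illustration of what native means here (one store of client records, one query), with invented client names and tags:

```python
clients = [
    {"name": "Apex Hormone Clinic", "vertical": "trt", "creative_tags": {"testimonial", "ugc"}},
    {"name": "Luma Med Spa", "vertical": "medspa", "creative_tags": {"before-after"}},
    {"name": "Ironwood Men's Health", "vertical": "trt", "creative_tags": {"offer-led"}},
]

# One query over one workspace, instead of opening each ad account one by one.
answer = [c["name"] for c in clients
          if c["vertical"] == "trt" and "testimonial" in c["creative_tags"]]
print(answer)   # -> ['Apex Hormone Clinic']
```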
4. Explicit approvals on every action.
AI that takes autonomous action on paid media is a liability. Every recommended action needs a human click. Every click gets logged. The log becomes the audit trail, the performance review, and the training data — in that order.
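A sketch of what that approval gate and its log record might look like. The field names are illustrative, not Zeke's actual schema:

```python
from datetime import datetime, timezone

audit_log: list[dict] = []

def execute(rec: dict) -> None:
    print(f"executing: {rec['action']} for {rec['client']}")

def review(rec: dict, approved: bool, reviewer: str, reason: str) -> None:
    """Gate every AI-recommended action behind an explicit human decision, and log it.
    Nothing executes without an entry here; rejection reasons become training signal."""
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "client": rec["client"],
        "action": rec["action"],
        "approved": approved,
        "reviewer": reviewer,
        "reason": reason,
    })
    if approved:
        execute(rec)   # the only code path that touches live spend

review({"client": "Grove Chiropractic", "action": "pause ad set 'Spring Promo'"},
       approved=False, reviewer="maria", reason="creative refresh ships Friday")
print(audit_log[-1])
```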
5. Priced for specialist firms, not SMB or enterprise.
The 20-to-50-client agency is the sweet spot. Pricing has to reflect that. Per-workspace pricing with soft client caps (not seat taxes, not per-account tolls). Affordable entry points that don't require sales calls. Upgrade triggers that align with actual value unlocks (e.g., white-label reports at the tier where most agencies have clients demanding them).
Our take
Why we built Zeke.
We built Zeke AI to be the system we described in the last section. Three-tier pricing ($197, $297, $497 per month). Client-count-based, not per-seat, not per-account. Decision engine and knowledge engine in one workspace. Every action gated behind a human approval. White-label and reseller rights unlock at the tier where most agencies actually need them.
We're not claiming to have solved every finding in this post. We're claiming to have pointed the architecture at the right problems — the decision gap, the knowledge gap, the fragmentation tax, the pricing misalignment — and started shipping. The full research-to-product mapping is documented on our pricing page, in our comparison pages, and in the interactive demo.
If this post lands — if you nodded at even one of the five findings — we'd love to talk. Email hello@usezeke.com. Tell us what we got wrong. Tell us what we missed. This post is v1 — we'll update it as the field evolves and as we learn from the agencies running pilots.
Methodology + caveats
This report is based on four sources, in descending order of weight:
- Direct operational observation inside four agency books we worked with personally during 2024–2025 (aggregated: 210 clients, $34M in managed annual spend). Names withheld to respect commercial confidentiality.
- Public pricing pages and product documentation of every major agency tool cited (DashThis, AgencyAnalytics, Revealbot, AdCreative.ai, Northbeam, TripleWhale, Hyros, Supermetrics, Meta Business Manager, Google Ads).
- Platform documentation from Meta (Business Manager, Advantage Shopping Campaigns, Advantage+ Creative) and Google (Performance Max, Demand Gen, Google Ads Editor).
- Informal conversations with agency ops leaders at industry events (Performance Marketing Summit 2025, Affiliate World Asia 2025, various agency-owner Slack communities).
Caveats: we're a company with a product in this space. We have an obvious incentive to frame the market in ways that make our product look necessary. We've tried to counter that by citing specific competitor products in positive terms where they earn it (see Finding 2 on generative creative tools, which we think are legitimately useful) and by naming the failure modes of our own approach (the fourth requirement in the next-generation section explicitly warns against AI taking autonomous action — a category Zeke has made a deliberate product choice not to enter).
If you see something in this post that's wrong, please email us. We'd rather be corrected in public than continue believing something false.