
    Not All AI Agents Are Created Equal: The 3-Category Test Before You Build Anything

    Alex R. · 8 min read
    Tags: AI agents for business, AI automation framework, n8n automation, SME AI strategy

    Here's a conversation I've had three times in the past month.

    A business owner tells me they want to "build an AI agent" for their company. I ask what it should do. They describe something like: read incoming client emails, pull relevant info from their knowledge base, draft a reply, and flag anything unusual for human review.

    That's a perfectly good use case. Here's the problem: in half of these conversations, the business owner then shows me a LinkedIn post about LangGraph or CrewAI and asks if they should build with that.

    The answer is almost always no. And there's a framework that explains why better than I've ever managed to on my own.

    The Three Categories That Change Everything

    Here's a pattern I've seen confirmed across dozens of real implementations, from large enterprise teams to 20-person companies:

    Every AI agent idea falls into one of three architectural types. And which type it is determines nearly everything: how long it takes to build, what it costs, which tools to use, and how you measure success.

    Category 1: Deterministic Automation
    You define every step. AI handles the content at specific points. Think n8n, Zapier, Make.com. You're building an intelligent flowchart.

    Category 2: Reasoning Agents (ReAct)
    AI decides what to do next. You provide the tools; the model controls the logic. Think LangGraph, CrewAI, Claude Code.

    Category 3: Multi-Agent Networks
    Multiple specialised agents coordinate with each other. Multiple teams, multiple domains, enterprise scale.
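    The defining difference between the first two categories is who controls the branching. A minimal sketch of the Category 2 control flow, with every name hypothetical (this is not LangGraph's or CrewAI's API, just the shape of a ReAct-style loop):

```python
# Hypothetical sketch: in Category 2, the model (not the builder) picks
# the next step each turn. 'decide_next' stands in for an LLM call.

def react_loop(goal, tools, decide_next, max_steps=8):
    """Minimal ReAct-style loop: let the model choose a tool, run it,
    feed the observation back, repeat until the model says 'finish'."""
    observations = []
    for _ in range(max_steps):
        action, arg = decide_next(goal, observations)  # model-controlled branch
        if action == "finish":
            return arg
        observations.append(tools[action](arg))        # tool result fed back
    raise RuntimeError("step budget exhausted")
```

    In Category 1, by contrast, there is no loop like this at all: the builder hard-codes the sequence, and the model only fills in content at fixed points.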

    The Crucial Insight for SMEs

    Here's the number that should change how you think about your first AI project: 60-70% of business agent opportunities are Category 1.

    Not the flashy autonomous reasoning agents you see on social media. Workflow automation with AI nodes.

    Category 1 projects:

    • Launch in 4-8 weeks (not 3-6 months)
    • Can be built by a product manager using no-code tools
    • Cost 50-500 euros per month to operate (not 5,000)
    • Deliver measurable ROI immediately

    The email automation example I described at the start? Category 1. Every step is predictable: receive email, classify intent, retrieve relevant docs, draft reply, route for approval. You can draw that as a flowchart. That makes it Category 1.
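    To make "you can draw it as a flowchart" concrete, here is that flow sketched as plain code. The function bodies are toy placeholders standing in for the AI nodes you'd configure in a tool like n8n; none of this is a real n8n API, just the fixed-path structure:

```python
# Category 1 sketch: the builder fixes every step and its order.
# Each function body is a placeholder for an AI or lookup node.

def classify_intent(email: str) -> str:
    # Placeholder for an AI classification node.
    return "billing" if "invoice" in email.lower() else "general"

def retrieve_docs(intent: str) -> str:
    # Placeholder for a knowledge-base lookup node.
    return f"knowledge-base articles on {intent}"

def draft_reply(email: str, docs: str) -> str:
    # Placeholder for an AI drafting node.
    return f"Draft reply citing {docs}"

def needs_human_review(email: str) -> bool:
    # Placeholder for the 'flag anything unusual' check.
    return "urgent" in email.lower() or "legal" in email.lower()

def handle_email(email: str) -> dict:
    """The whole flowchart in code: the same path every time."""
    intent = classify_intent(email)
    docs = retrieve_docs(intent)
    return {
        "reply": draft_reply(email, docs),
        "route": "human_review" if needs_human_review(email) else "auto_send",
    }
```

    Notice there is no point where a model decides what happens next. That property, not the presence of AI, is what makes it Category 1.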

    Where Teams Go Wrong

    The most common failure mode is trying to solve Category 1 problems with Category 2 tools.

    You've read about LangGraph. It sounds powerful. So you use it for your email automation agent. Now you have a system that autonomously decides how to handle emails, which is overkill for a process you could have mapped in a flowchart. It takes three months to build instead of six weeks. It produces unexpected outputs on edge cases you never predicted. And it costs roughly 10x more to run.

    The opposite error is less common but more dangerous: using Category 1 tools for genuinely Category 2 problems. Your flowchart has 35 branches, and you keep adding more every week because users phrase requests in ways you didn't predict. That's a sign you need ReAct, not more branches.

    The 5-Minute Triage Test

    Ask this about any agent idea:

    "Can I draw the complete process as a flowchart with fewer than 20 decision points?"

    If yes: Category 1. Build it in n8n or Zapier. Ship in 6 weeks.

    If no, ask: "Does the same user request trigger different action sequences every time based on context?"

    If yes: Category 2. Use LangGraph or similar. Budget 3 months.

    If you have multiple departments each needing their own specialised agent that must coordinate: Category 3. You're probably not there yet.

    A Real ROI Example

    Here are real metrics from a Category 1 email support agent built for a SaaS company:

    • Week 1: 52% completion rate (edge cases discovered)
    • Week 4: 78% completion rate (classification refined)
    • Week 8: 87% completion rate (production-ready)
    • Result: 3,000 support emails/month automated, $18K/month savings
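    A quick back-of-envelope check on those figures (my arithmetic, not the company's internal model):

```python
# Sanity check on the case-study numbers quoted above.
emails_per_month = 3000
monthly_savings = 18_000   # USD, from the result line above

savings_per_email = monthly_savings / emails_per_month
print(f"${savings_per_email:.2f} saved per automated email")  # prints $6.00
```

    At $6 per email, even the top of the 50-500 euro Category 1 operating band quoted earlier stays under about 3% of the monthly savings, which is why the ROI shows up immediately rather than after a long payback period.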

    That's Category 1. n8n. Six weeks to deploy. No ML engineers required.

    What to Do With Your Current Roadmap

    Pull out your list of AI agent ideas. For each one, apply the triage test. At least half of them are probably Category 1 masquerading as Category 2.

    Prioritise the Category 1 items. Ship one. Measure the ROI. Use that win to fund the next project and build confidence across your team.

    The point isn't to avoid sophistication forever. Category 2 and 3 projects genuinely deliver things Category 1 can't. But starting there before you've proven value with simpler automation is how you end up with an expensive AI initiative that nobody uses.

    Start simple. Measure results. Scale what works.
