
How to Audit the AI You’re Already Paying For

Most owners have more AI running in their business than they realize, and less of it working than they think. Before you add another tool, run this audit. It takes two hours and tells you exactly what to keep, what to fix, and what to cut.

I ran this audit on my own operation about a year ago and found three tools nobody on the team used consistently, two subscriptions that overlapped in function, and one workflow that had AI attached to it but still required a human to redo most of the output before it went anywhere useful. That is not a technology problem. That is a management problem. The tools did not fail. I failed to evaluate them properly before they became habits.

The AI spending problem in small businesses is not that owners buy too many tools. It is that owners stop evaluating the tools they already have. The demo works, the subscription starts, the team adopts it unevenly, and six months later nobody remembers why they signed up or whether it is doing anything measurable.

The audit I am going to walk you through takes two hours. It produces four outputs: a complete inventory of every AI tool in your stack, a usage score for each one, a decision on each tool (keep, fix, or cut), and a short list of what actually needs to happen next. Nothing fancy. A spreadsheet and honest answers.

Step One: Build the Full Inventory

Most owners undercount their AI stack by 30 to 40 percent. That is because AI is now embedded inside tools you already use and do not think of as AI tools. Before you evaluate anything, you need an accurate count of what you are running.

Pull every subscription from your credit card statements and bank account for the last 90 days. Write down every tool, not just the ones with “AI” in the name. Then go through this list and mark anything that has an AI feature, even if you do not use it:

  • Email platforms (AI writing assistants, smart send-time features, subject line optimization)
  • CRM software (lead scoring, conversation intelligence, email drafting)
  • Social media schedulers (caption generation, hashtag suggestions, performance prediction)
  • Design tools (background removal, image generation, auto-layout)
  • Customer service software (chatbots, ticket routing, response suggestions)
  • Project management tools (task prioritization, deadline prediction, workload balancing)
  • Dedicated AI tools (ChatGPT, Claude, Perplexity, Jasper, and any others you pay for directly)

Write down the tool name, what you pay per month, and who on your team uses it. Do not filter anything out yet. The goal is a complete picture before you start making decisions.

When I ran this step, I found AI features inside tools I had used for years without touching the AI functionality. An email platform with an AI writing assistant I had never activated. A CRM with lead scoring I had never configured. Two separate content tools that both offered AI caption drafting. I was paying for all of it and using almost none of it.

Step Two: Score Each Tool on Four Criteria

Once you have the inventory, score every tool on four questions. Use a simple 1–3 scale. One means it fails the criterion. Two means it partially meets it. Three means it fully meets it.

Adoption. How consistently does your team use this tool? One means sporadic use or nobody uses it. Two means some team members use it regularly. Three means it is part of a documented workflow that the whole team runs.

Output quality. Does the AI output from this tool require significant editing before it is usable? One means the output needs heavy rework most of the time. Two means it needs light editing. Three means output passes your quality standard with minimal review.

Measurable impact. Do you have a number that shows this tool is saving time, improving output, or generating revenue? One means you have no measurement. Two means you have a rough estimate. Three means you have a specific, tracked metric.

Overlap. Does this tool duplicate what another tool in your stack already does? One means it has significant overlap with another paid tool. Two means partial overlap. Three means it is the only tool doing this job.

The scoring shortcut: Any tool with a combined score of 4 or below across the four criteria is either a cut candidate or needs an immediate fix plan. A tool scoring 10 or above is earning its place. Everything in between needs a decision, not a pass.
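If you want the shortcut as a formula for your spreadsheet or a script, here is a minimal sketch in Python. The function name and bucket labels are illustrative, not part of any tool; the thresholds are the ones from the shortcut above.

```python
# Audit shortcut: score each tool 1-3 on the four criteria, sum the
# scores (range 4-12), and bucket the total.

def audit_decision(adoption, output_quality, impact, overlap):
    """Each argument is a 1-3 score; returns the review bucket."""
    total = adoption + output_quality + impact + overlap
    if total <= 4:        # fails across the board
        return "cut-or-fix"
    if total >= 10:       # earning its place
        return "keep"
    return "decide"       # middle ground still needs an explicit decision

print(audit_decision(1, 1, 1, 1))  # → cut-or-fix
print(audit_decision(3, 3, 2, 3))  # → keep
print(audit_decision(2, 2, 2, 2))  # → decide
```

The same rule works as a spreadsheet formula: sum the four score columns and flag totals of 4 and totals of 10 or more.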

Step Three: Separate Dormant From Broken

This step is where most audits go wrong. Owners see a low-scoring tool and assume the tool is the problem. Half the time, the tool is fine. The problem is that nobody ever built a workflow around it.

A dormant tool is one your team never fully adopted. The features work. The potential is there. Nobody made it a habit. A broken tool is one your team tried to use, the output was consistently poor, and the tool failed its job regardless of how you prompted it or configured it.

The distinction matters because the fix is different. A dormant tool needs a workflow and an owner. A broken tool needs to be cut. Treating a dormant tool like a broken one means cutting something that has real value you never activated. Treating a broken tool like a dormant one means spending time building workflows around something that will never produce usable output.

Ask this question for every low-scoring tool: Is there a documented workflow for this tool, with a named person responsible for running it? If the answer is no, the tool is probably dormant, not broken. If the answer is yes and the team ran the workflow consistently and the output was still poor, the tool is broken.

The difference between those two decisions is real money.

In my audit, I found two dormant tools and one broken one. The broken tool went immediately. The two dormant tools got 30-day pilots: documented workflow, named owner, weekly check-in. One became a core part of how we draft client deliverables. The other still produced inconsistent output after four weeks of genuine effort, so it went too.

Step Four: Make the Three-Column Decision

Every tool in your inventory goes into one of three columns: Keep, Fix, or Cut. No maybes. No “let’s revisit this later.” A decision now, even an imperfect one, is more valuable than a deferred one that costs you another month of subscription fees.

Keep means the tool scores well, has an active workflow, and has a measurable result. You keep it and you protect it by making sure the workflow that produces those results stays documented and owned.

Fix means the tool is dormant, not broken, and the gap is a workflow problem, not a tool problem. Fix means 30 days, one named owner, one documented workflow, one weekly check-in. At the end of 30 days the tool moves to Keep or Cut. No second Fix cycles.

Cut means the tool is broken, fully overlapped by another tool you are keeping, or dormant with no clear workflow path. Cancel the subscription this week. Not next quarter. This week.

The hardest decisions are the tools you paid a lot for or championed internally. Sunk cost is not a reason to keep a broken tool. The question is never “how much did we spend?” The question is “what does keeping this cost us every month going forward, and what does it produce?” If the production number is zero, the answer is Cut.
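For anyone tracking the audit in a script rather than a spreadsheet, the dormant-versus-broken question and the three-column decision can be sketched together. This is a simplified model, and the field names are illustrative; edge cases like full overlap with a kept tool still need the judgment described above. The middle-score branch here gives the tool one fix cycle, after which it moves to keep or cut.

```python
# One row of the audit per tool. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    total_score: int          # sum of the four 1-3 criteria scores
    has_workflow: bool        # documented workflow with a named owner
    output_stayed_poor: bool  # workflow was run consistently, output still poor

def decide(tool: Tool) -> str:
    """Return 'keep', 'fix', or 'cut' for one tool."""
    if tool.has_workflow and tool.output_stayed_poor:
        return "cut"   # broken: the team tried properly and it still failed
    if not tool.has_workflow and tool.total_score <= 4:
        return "fix"   # dormant: 30-day pilot, named owner, weekly check-in
    if tool.total_score >= 10:
        return "keep"  # earning its place; protect the workflow
    return "fix"       # middle scores get one fix cycle, then keep or cut
```

No maybes: every tool falls into exactly one column, which is the point of the exercise.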

What the Audit Tells You Beyond the Tool List

The tool inventory is the surface output. The more valuable output is what the audit tells you about your adoption patterns.

If most of your tools land in the dormant category, the problem is not the tools. The problem is that your team does not have a system for turning new tools into operating habits. The fix is a standard onboarding protocol for every new tool: 30-day pilot, documented workflow, named owner, weekly check-in. That protocol prevents the next round of dormant tools before they accumulate.

If most of your tools land in the broken category, the problem is your evaluation process before purchase. You are buying tools based on demos and promises without running them against your actual workflow before committing. The fix is a pre-purchase checklist: does this tool solve a specific named problem, who owns it, what is the 30-day success metric, and what is the exit trigger if it fails?

If most of your tools are in the Keep column but your measurable impact scores are low, the problem is measurement. The tools work but you have not connected them to numbers. The fix is to name one metric per tool this week. Not a vague improvement. A specific, trackable number: email drafting time, proposal turnaround, hours per week on a specific task.

The One Pattern That Costs the Most

The most expensive pattern I see across the businesses I work with is the overlapping stack. Two or three tools doing the same job, each with partial adoption, each costing real money, none of them producing the results any one well-configured tool would produce on its own.

This happens because owners buy tools reactively. A vendor pitches a new capability. An industry peer recommends something. A newsletter covers a tool that sounds like it solves a problem. The purchase happens before anyone checks whether the current stack already handles that job.

The audit surfaces this overlap immediately. When you map out four content tools and two of them offer AI caption drafting, two of them offer scheduling, and one of them is the only one your team actually opens, the path forward is obvious. Consolidate to the one tool the team uses. Cancel the rest. Take the savings and put them toward the one remaining tool’s premium tier or a new tool that fills an actual gap.

Consolidation is not about spending less on AI. It is about getting more from the AI you keep. A team running one well-configured tool with a documented workflow and a named owner will outperform the same team running four tools with inconsistent adoption every single time.

Running the Audit This Week

Block two hours. Pull 90 days of subscription charges. Build the inventory. Score every tool on the four criteria. Separate dormant from broken. Make the three-column decision for each one. Cancel the cuts this week.

Then do one more thing: document what you are keeping and why. A short note next to each Keep tool: what it does, who owns it, what workflow it runs in, and what number you are tracking to know it is working. That note takes five minutes per tool. It becomes the reference point the next time someone asks why the team uses a specific tool, and it makes the next audit faster.

The goal is not a smaller stack. The goal is a stack where every tool earns its place every month. That standard, applied consistently, produces more measurable output from AI than any new tool you add to a stack full of tools nobody uses.

Learn, Grow, Repeat. If you want help running the audit or deciding which tools in your stack are worth keeping, that is the kind of work I do with clients.

Abel Sanchez

AI Strategist & Marketing Veteran

Over 20 years building brands and systems. Partner at Starfish Ad Age and Starfish Solutions. Abel helps businesses implement AI that actually creates results — not just noise.