How to Run an AI Audit (The Exact Process I Use With Clients)
I ran five AI audit interviews in one week across completely different industries. A coffee importer. A fintech platform. A services business scaling past 20 people.
The problems were shockingly similar.
Not the industries - those couldn't be more different. But the patterns underneath? Almost identical. The busiest person on the team is drowning in manual work. Nobody has time to figure out what to automate. And everyone has preconceptions about what AI can and can't do that are wildly out of date.
That's what an AI audit actually reveals. Not a list of tools to buy. A map of where your team is bleeding time - and the specific places where automation will give you that time back. Here's the exact process I use.
Why most "AI strategies" fail before they start
Most companies approach AI backwards. They start with the technology. Someone reads about a new tool, gets excited, and tries to jam it into an existing workflow. Or worse - leadership announces an "AI strategy" without understanding where the actual problems are.
The number one reason AI implementations fail is that nobody mapped the real pain points first. They built a solution for a problem they assumed existed, rather than the one that actually does.
I've seen this pattern dozens of times. A founder hears about an AI tool at a conference, buys a subscription, and tells the team to use it. Three months later, adoption is at 10% and the subscription gets quietly cancelled.
The fix is dead simple: talk to your people first. Not in a group meeting. Not in a Slack poll. One-on-one, 30 minutes each, with a structured set of questions. That's the audit.
An AI audit isn't a technology assessment. It's a people assessment. You're mapping how humans actually spend their time - then figuring out which chunks of that time a machine could handle better, faster, or cheaper.
The 30-minute interview that reveals everything
Here's the thing most people get wrong about AI audits: they try to do them in a group setting. A team meeting where someone asks, "So, where could we use AI?"
That never works. And the reason is human nature.
People will not admit inefficiencies in front of their boss. They won't say "I spend 4 hours a week copying data between spreadsheets" when their manager is sitting right there. They'll nod along and say everything's fine.
The fix is individual interviews. 30 minutes, one-on-one, with each person on the team. No managers in the room. No judgment. Just genuine curiosity about how they spend their day.
I schedule these back to back across a single day or two. It's intense, but the density matters - you start seeing patterns by interview three that you'd miss if you spread them out over weeks.
The format is simple: a video call (or in person if you can), screen share encouraged, and permission to be brutally honest about what sucks. Most people light up when you give them a safe space to complain about the tedious parts of their job.
What questions to ask (and why order matters)
The order of questions matters more than the questions themselves. You're building trust and momentum. Start broad, go specific, then let them dream.
Opening (5 mins): "Walk me through a typical day. What's the first thing you do when you open your laptop?" This gets them talking without pressure. You're just listening.
Pain points (10 mins): "What do you hate doing? What takes way longer than it should? Where do you feel like you're doing work a computer should be doing?" These are the gold questions. Let them rant. Take notes on everything.
Time mapping (10 mins): "How many hours a week do you spend on [the thing they just complained about]? What would you do with that time if you got it back?" This is where you start quantifying. Rough numbers are fine - you're not auditing their timesheet, you're getting ballpark estimates.
The dream question (5 mins): "If you could wave a magic wand and have any part of your job automated perfectly, what would it be?" This is where the magic happens.
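If it helps to keep yourself honest on timing, the four phases above can be encoded as data. A minimal sketch - the phases and minutes come from the script above; the structure itself is just one convenient representation:

```python
# Interview guide as (phase, minutes, opening question) tuples.
# Phases and timings are from the script above.
INTERVIEW_GUIDE = [
    ("Opening", 5, "Walk me through a typical day."),
    ("Pain points", 10, "What do you hate doing?"),
    ("Time mapping", 10, "How many hours a week does that take?"),
    ("The dream question", 5, "If you could wave a magic wand..."),
]

# Sanity check: the whole interview fits in one 30-minute slot.
total_minutes = sum(minutes for _, minutes, _ in INTERVIEW_GUIDE)
assert total_minutes == 30

for phase, minutes, question in INTERVIEW_GUIDE:
    print(f"{phase} ({minutes} min): {question}")
```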
One person I was interviewing said something like, "Oh, I don't think that's possible, but if it were, I'd want X, Y, and Z." The thing is - with AI, it was absolutely possible. And it was actually a pretty easy build. But she'd written it off because her mental model of what technology can do was stuck in 2023.
That's the single biggest insight from running these audits: people self-filter. They don't ask for what they actually need because they assume it's impossible. Your job as the auditor is to hear the problem behind the self-censorship.
Mapping pain points to automation opportunities
After you've done all the interviews, you'll have a messy pile of notes. Time to turn them into a map.
I use a simple grid. Four columns:
- Pain point - what the person described
- Who feels it - which roles/people mentioned it
- Hours per week - estimated time wasted across the team
- Automation difficulty - easy, medium, or hard
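A spreadsheet is all you need for the grid, but if you prefer code, here's a minimal sketch. The example entries and the priority formula (hours recovered per unit of effort) are my own illustration, not a formal scoring method:

```python
from dataclasses import dataclass

# Difficulty mapped to a rough effort score so rows can be ranked:
# more hours and less effort means higher priority.
DIFFICULTY = {"easy": 1, "medium": 2, "hard": 3}

@dataclass
class PainPoint:
    description: str       # what the person described
    who: list[str]         # roles/people who mentioned it
    hours_per_week: float  # estimated time wasted across the team
    difficulty: str        # easy, medium, or hard

    def priority(self) -> float:
        # Simple ratio: hours recovered per unit of build effort.
        return self.hours_per_week / DIFFICULTY[self.difficulty]

# Hypothetical entries for illustration.
grid = [
    PainPoint("Copying data between spreadsheets", ["ops"], 4, "easy"),
    PainPoint("Manual monthly reporting", ["founder", "ops"], 6, "medium"),
    PainPoint("Custom compliance checks", ["finance"], 8, "hard"),
]

for p in sorted(grid, key=PainPoint.priority, reverse=True):
    print(f"{p.priority():.1f}  {p.description} "
          f"({p.hours_per_week} hrs/wk, {p.difficulty})")
```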
The patterns jump out fast. You'll notice the same pain point showing up across multiple interviews. When three people independently complain about the same thing, that's your highest-priority target.
The bottlenecks are almost always in the busiest people. The head of marketing doing everything alone. The ops person manually reconciling data across three platforms. The founder still doing CFO-level financial work by hand.
These are the people who would benefit most from AI automation. They're also, paradoxically, the people who have the least time to figure it out themselves. They're too busy being busy to stop being busy. That's exactly why an external audit works - someone else does the thinking for them.
Here's the contrarian take that most AI consultants get wrong: you don't always need to uncover problems. So often, people already have great ideas about what they want automated. They've been thinking about it for months. They just needed someone to listen, validate the idea, and say "yes, that's buildable."
An audit isn't always about discovery. Sometimes it's about permission.
[Interactive tool: AI Audit Pain Point Mapper - fill in your team's pain points to see the total hours per week and annual cost at stake, plus the quick wins automatable with off-the-shelf tools.]
The ROI calculation that sells itself
Once you've mapped the pain points, you need to put dollar figures on them. This is what turns a wish list into a business case.
The formula is simpler than you'd think:
Cost of the problem = (Hours per week x Hourly salary cost) x 48 weeks
That's your baseline. If someone earning $120K is spending 8 hours a week on manual data entry, that's roughly $23,000 a year in salary going to work a machine could do.
But the salary cost is only half the equation. The other half is harder to quantify but often more valuable: what would that person do with the freed-up time?
If your head of marketing gets back 10 hours a week, they're not going to sit idle. They'll run more campaigns, test more channels, build more content. The ROI isn't just the salary saved - it's the revenue generated by redirecting that time to high-value work.
I calculate both numbers for every automation opportunity:
- Direct savings: salary cost of the time eliminated
- Revenue upside: estimated impact of redirecting that time to growth activities
Some opportunities are pure efficiency plays - saving $20K a year on manual work. Others are growth plays - freeing up a salesperson to make 30 more calls a week, which might drive $200K in pipeline.
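The direct-savings side is a one-liner. Here's a sketch using the figures from this section; the salary-to-hourly conversion (annual salary divided by roughly 2,000 working hours) is a common rule of thumb, not something exact:

```python
WEEKS_PER_YEAR = 48  # allow for holidays and slack, as in the formula above

def direct_savings(hours_per_week: float, annual_salary: float) -> float:
    """Salary cost of the time eliminated by an automation."""
    hourly = annual_salary / 2000  # rough salary-to-hourly conversion
    return hours_per_week * hourly * WEEKS_PER_YEAR

# The example from above: $120K salary, 8 hrs/week of manual data entry.
print(direct_savings(8, 120_000))  # 23040.0 -> "roughly $23,000 a year"

# The revenue upside is an estimate, not a formula: a salesperson freed up
# to make 30 more calls a week might drive ~$200K in pipeline. Present both
# numbers side by side when you make the case.
```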
The growth plays almost always have a bigger ROI than the efficiency plays. But most people only think about the cost savings. That's a mistake. When you present both numbers to a founder, the decision to invest in automation becomes obvious.
Build vs buy: when to use off-the-shelf tools vs custom agents
Not every automation needs to be custom-built. That's the second mistake I see: people hear "AI audit" and think it means building custom software. Sometimes the answer is a $30/month Zapier workflow.
Here's my framework for deciding:
Buy off-the-shelf when:
- The problem is generic (email drafting, meeting notes, basic data entry)
- An existing tool solves 80%+ of the need
- The team can set it up themselves with a bit of guidance
- Speed matters more than precision
Build custom when:
- The workflow is specific to your business
- You need it to integrate with your existing systems
- The data is proprietary or sensitive
- The 80% solution isn't good enough (regulatory, financial, or customer-facing use cases)
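The framework reduces to a handful of yes/no questions. A hedged sketch - the rule that any strong "build" signal overrides the buy case is my own simplification, not a formal decision procedure:

```python
def buy_or_build(
    generic_problem: bool,         # email drafting, meeting notes, data entry
    existing_tool_covers_80: bool, # an off-the-shelf tool solves 80%+
    team_can_self_serve: bool,     # setup needs only a bit of guidance
    needs_deep_integration: bool,  # must plug into your existing systems
    sensitive_data: bool,          # proprietary or regulated data
    needs_precision: bool,         # regulatory, financial, customer-facing
) -> str:
    # Any strong "build" signal overrides the buy case.
    if needs_deep_integration or sensitive_data or needs_precision:
        return "build"
    if generic_problem and existing_tool_covers_80 and team_can_self_serve:
        return "buy"
    return "undecided - dig deeper in the audit"

# Generic meeting-notes workflow: an off-the-shelf tool is fine.
print(buy_or_build(True, True, True, False, False, False))   # buy
# Custom CRM-integrated reporting on proprietary data: build.
print(buy_or_build(False, False, False, True, True, False))  # build
```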
The honest truth: about 60% of automation opportunities I find in audits can be solved with existing tools. ChatGPT, Claude, Zapier, Make, Notion AI - the ecosystem is mature enough now that most common workflows have a tool already.
The other 40% need something custom. These are usually the high-value ones - the workflows so specific to your business that no generic tool covers them. Custom AI agents that pull from your CRM, generate reports in your format, or handle your specific compliance requirements.
My recommendation after every audit is a mix. Quick wins first (existing tools, implemented this week), then custom builds for the high-ROI opportunities that justify the investment.
What happens after the audit
The audit is not the end. It's the starting line.
What I deliver after the interviews is a document that covers three things:
- The pain point map - every automation opportunity ranked by ROI and difficulty
- The recommendation - buy vs build for each, with specific tools or build specs
- The priority order - what to do first, second, third
The priority order matters most. Founders love to jump to the biggest, hardest thing. I push them toward the quick wins first. Build momentum before you build complexity.
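"Quick wins first" is really just a sort order: easiest items first, and within a tier, highest ROI first. A sketch reusing the easy/medium/hard labels from the pain-point grid - the opportunities and dollar figures are hypothetical:

```python
DIFFICULTY_ORDER = {"easy": 0, "medium": 1, "hard": 2}

# (opportunity, difficulty, estimated annual ROI in $) - hypothetical examples.
opportunities = [
    ("Custom CRM reporting agent", "hard", 200_000),
    ("Meeting-notes tool rollout", "easy", 15_000),
    ("Zapier data-sync workflow", "easy", 23_000),
    ("Invoice-matching automation", "medium", 40_000),
]

# Quick wins first: sort by difficulty, then by ROI descending within a tier.
plan = sorted(opportunities, key=lambda o: (DIFFICULTY_ORDER[o[1]], -o[2]))
for i, (name, difficulty, roi) in enumerate(plan, 1):
    print(f"{i}. {name} ({difficulty}, ~${roi:,}/yr)")
```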
If someone on the team gets back 5 hours a week from a simple automation in week one, they become an evangelist for the whole programme. That internal champion is worth more than any business case.
The other thing I always emphasise: AI capability is moving so fast that what's "hard" today might be "easy" in three months. Some items on the audit list are better parked and revisited later. The cost of waiting on those is low. The cost of building something clunky that gets obsoleted in a quarter is high.
So the audit document isn't static. It's a living roadmap that should be revisited every quarter as the tools evolve.
This article is licensed under CC BY-NC 4.0. Share freely with attribution.
If you want to run an AI audit on your own team, here's the simplest version: book 30 minutes with each person individually, ask them what they hate doing, and write down the hours. You'll be shocked what surfaces.
If you want someone to do it for you - someone who's done dozens of these and can spot the patterns fast - get in touch. I run structured AI audits for teams of 5-50 and the ROI calculation usually pays for the engagement before we even start building.