How to Run Your Own AI Readiness Assessment in One Day
Most AI readiness assessments are consultant traps. Here is a one-day version you run yourself — five signals, real questions, honest scoring.
A consulting firm pitched one of my portfolio companies a “comprehensive AI readiness assessment.” Six weeks. $85,000. They’d interview stakeholders, map data flows, evaluate infrastructure, benchmark against industry peers, and deliver a 120-page report with a maturity score on a five-point scale.
I asked what would be different about their recommendations if the company scored a 2.3 versus a 3.1. The partner paused for a long time and then said, “Well, the phasing would be different.”
The phasing. Eighty-five thousand dollars for different phasing.
Here’s what I’ve learned from running and investing in companies across manufacturing, distribution, healthcare, and professional services: you don’t need a six-week assessment to know if you’re ready for AI. You need one honest day, five signals, and a willingness to score yourself without flattery.
Why Most Readiness Assessments Are Consultant Traps
The traditional AI readiness assessment is designed to do two things: justify the consulting engagement, and create enough complexity that you feel like you need more consulting to act on the results.
They’ll evaluate your “data maturity” on a spectrum from “ad hoc” to “optimized.” They’ll map your “technology ecosystem” across fourteen dimensions. They’ll assess your “organizational change capacity” using frameworks borrowed from academic papers nobody has read since 2019.
At the end, you’ll have a beautiful deck with a spider chart showing that you’re strong in “leadership vision” but weak in “data governance.” And the recommended next step—surprise—is another engagement to address your gaps.
The problem isn’t that these assessments are wrong. They’re usually directionally correct. The problem is that they’re wildly overcomplicated for what you actually need to know.
What you actually need to know is this: Do I have the raw ingredients to make AI work in my business? If yes, where do I start? If no, what are the one or two things I need to fix first?
You can answer that in a day.
The Five Signals of AI Readiness
These five signals tell you whether your business has what it needs for AI to deliver real value. Not theoretical value. Not “we could potentially maybe” value. Real, measurable, shows-up-on-the-P&L value.
Signal 1: You Have Repetitive Workflows with Clear Inputs and Outputs
This is the foundational signal. AI needs patterns to be useful. If your business runs on purely creative, one-off decisions with no repeatable structure, AI won’t help much yet.
But almost no business is actually like that. Even the most “creative” operations have repetitive scaffolding underneath. The law firm that does unique litigation work still has repetitive intake, billing, document management, and scheduling. The custom manufacturer still has repetitive quoting, purchasing, and quality documentation.
Questions to ask your ops manager:
- What tasks does your team do more than 10 times per week that follow roughly the same steps?
- Which of those tasks have a clear trigger (something initiates it) and a clear output (something gets produced)?
- How much variation exists between instances? Is it 80% the same with 20% variation, or genuinely different every time?
Scoring:
- Strong (3 points): You can identify 5+ repetitive workflows with clear inputs/outputs and less than 30% variation between instances.
- Moderate (2 points): You can identify 2-4 repetitive workflows, but the inputs and outputs aren’t always clean—there’s ambiguity about when the workflow starts or what “done” looks like.
- Weak (1 point): You struggle to identify repetitive workflows, or the ones you find have so much variation that every instance feels unique.
- Missing (0 points): Your operation is genuinely ad hoc with no repeatable patterns.
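If it helps to keep the workshop concrete, here’s a minimal sketch of a record you could fill in for each candidate workflow. The fields mirror the questions above; the example workflow and its values are illustrative, not from any particular operation.

```python
# One record per candidate workflow; fields mirror the Signal 1 questions.
# The example values are illustrative.
workflow = {
    "name": "customer quote",
    "trigger": "RFQ email arrives",             # what initiates it
    "output": "priced quote sent to customer",  # what gets produced
    "frequency_per_week": 30,                   # looking for 10+ per week
    "variation_pct": 20,                        # Strong scoring needs < 30%
}
```

Five or more records like this, each under 30% variation, puts you at Strong.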
Signal 2: Your Team Complains About the Same Problems Weekly
This is the signal most assessments miss, and it’s one of the most reliable indicators of where AI will deliver fast ROI.
When your team brings up the same frustrations every week—“I spent all morning chasing down that information,” “The schedule changed again and nobody told the warehouse,” “We had another billing error because the data wasn’t updated”—they’re mapping your invisible factory for you. Those recurring complaints are workflows that are broken, manual, or both.
Questions to ask your ops manager:
- What are the top three complaints you hear from your team every week?
- Which of these complaints involve information not being where it needs to be, when it needs to be there?
- How many hours per week does your team spend on workarounds for known problems?
Scoring:
- Strong (3 points): You can immediately name 3+ recurring complaints that involve information flow, manual processes, or coordination failures. Your team loses 10+ hours per week to workarounds.
- Moderate (2 points): You can name 1-2 recurring issues, but they’re vague (“things are chaotic”) rather than specific (“invoice data doesn’t sync to the ERP”).
- Weak (1 point): Complaints exist but they’re about people, not processes. (“Steve is always behind” rather than “the scheduling process creates bottlenecks.”)
- Missing (0 points): You don’t hear complaints because you’re not close enough to the operation, or because your team has given up raising issues.
Signal 3: You Have Data Somewhere, Even If It’s Messy
You don’t need a data lake. You don’t need a data warehouse. You don’t need clean, normalized, API-accessible data flowing through a modern stack.
You need data. Somewhere. In some form. Spreadsheets count. Your ERP counts. Your CRM counts. Even paper logs count if someone can photograph them.
The myth that you need “clean data” before you can use AI has probably delayed more implementations than any other single misconception. Modern AI is remarkably good at working with messy, incomplete, inconsistently formatted data. It’s not ideal—clean data is always better—but it’s not a showstopper.
What is a showstopper is having no data at all. If your business runs entirely on verbal communication and nothing is recorded anywhere, AI has nothing to work with. But I’ve never encountered a business over $1M in revenue where that was actually true.
Questions to ask your ops manager:
- Where does data live in our business? List every system, spreadsheet, database, and paper log.
- For our most important workflows, can we pull historical data for the last 90 days?
- How many of our systems can export data in some structured format (CSV, API, database query)?
Scoring:
- Strong (3 points): You have 3+ systems with structured data, at least 90 days of history, and export capability. Data isn’t perfect, but it exists and is accessible.
- Moderate (2 points): You have data but it’s fragmented—some in the ERP, some in spreadsheets, some in email. You could assemble a picture, but it would take effort.
- Weak (1 point): Most of your operational data lives in people’s heads or in formats that are hard to access (paper, unstructured emails, verbal handoffs).
- Missing (0 points): You genuinely have no recorded operational data.
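If you want a consistent format for the inventory, here’s a minimal sketch in Python. The sources and values are examples, not a prescription; the fields follow the three questions above.

```python
# One row per data source; fields follow the Signal 3 questions.
# The sources and values are illustrative examples.
inventory = [
    {"source": "ERP",           "holds": "orders, invoices",   "access": "CSV export",  "history": "5 years"},
    {"source": "quoting sheet", "holds": "quotes, win/loss",   "access": "spreadsheet", "history": "18 months"},
    {"source": "paper QC logs", "holds": "inspection results", "access": "manual only", "history": "90 days"},
]

# Strong scoring looks for 3+ structured, exportable sources with 90+ days of history.
exportable = [row for row in inventory if row["access"] != "manual only"]
print(f"{len(exportable)} of {len(inventory)} sources are exportable")
```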
Signal 4: Someone on Your Team Is Already Using AI on Their Own
This is the adoption signal, and it’s more important than most operators realize.
If someone on your team is already using ChatGPT to draft emails, using AI to generate reports, or experimenting with AI tools for their specific function, you have something money can’t buy: an internal champion who’s already past the skepticism barrier.
That person is your accelerator. They’ve already proven to themselves that AI works. They’ve already figured out some of the rough edges. They’re ready to go further—they just need permission and support.
Questions to ask your ops manager:
- Is anyone on the team using AI tools on their own, even unofficially?
- What are they using it for? How often?
- Have they shared what they’ve learned with anyone else?
Scoring:
- Strong (3 points): Multiple people are actively using AI tools for work-related tasks. They’re sharing tips. There’s organic momentum.
- Moderate (2 points): One or two people are experimenting, but it’s isolated. They haven’t shared broadly, and others don’t know about it.
- Weak (1 point): Someone has tried ChatGPT for personal use but hasn’t applied it to work.
- Missing (0 points): Nobody on your team has touched any AI tool.
Signal 5: You Have a Clear Before-and-After You Can Measure
This is the accountability signal. Without it, you’ll never know if AI is working—and you’ll never build the internal case for expanding it.
You need to be able to say: “Today, this process takes X hours and produces Y errors. After AI, we expect it to take A hours and produce B errors.” If you can’t define X and Y with reasonable accuracy, you can’t measure improvement.
This doesn’t need to be precise to the decimal. “Our quoting process takes about 45 minutes per quote and we do 30 per week” is specific enough. “Our quoting process is slow” is not.
Questions to ask your ops manager:
- For the workflows we identified in Signal 1, how long does each one take today?
- What’s the error rate or rework rate?
- How would we know if it got 30% better? What would we measure?
Scoring:
- Strong (3 points): You can quantify time, cost, and quality metrics for your top 3 workflows. You have baseline data or can establish it within a week.
- Moderate (2 points): You have a general sense of time and cost but haven’t tracked it formally. You could establish a baseline with some effort.
- Weak (1 point): You know things are “slow” or “expensive” but can’t put numbers to it.
- Missing (0 points): You have no visibility into process performance.
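To see what a baseline looks like in numbers, here’s a minimal sketch using the quoting example above. The loaded labor rate is my illustrative assumption, not a benchmark.

```python
# Baseline math for one workflow, using the quoting example above.
minutes_per_instance = 45    # time per quote
instances_per_week = 30      # quotes per week
loaded_rate_per_hour = 55    # fully loaded labor cost; illustrative assumption

hours_per_week = minutes_per_instance * instances_per_week / 60
weekly_cost = hours_per_week * loaded_rate_per_hour
print(f"{hours_per_week:.1f} hours/week, ~${weekly_cost:,.0f}/week")
# -> 22.5 hours/week, ~$1,238/week
```

That single number is your X. If AI gets the same quotes done 30% faster, you know exactly what the improvement is worth.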
How to Score Your Assessment
Add up your points across all five signals. Here’s what your total means:
12-15 Points: Ready to Deploy
You have the raw ingredients. Your workflows are repeatable, your data exists, your team has momentum, and you can measure results. Stop assessing and start building.
Your next step isn’t more evaluation. It’s picking the highest-value workflow from Signal 1, confirming it against the complaints from Signal 2, and deploying your first AI agent with human-in-the-loop oversight. You should be operational within 30 days.
8-11 Points: Ready with Prep Work
You have most of what you need, but there are gaps. The good news: those gaps are fixable in 2-4 weeks, not 6 months.
Look at where you scored lowest. If it’s Signal 3 (data), spend two weeks organizing your data sources and establishing basic accessibility. If it’s Signal 5 (measurement), spend a week baselining your top workflows before you introduce AI. If it’s Signal 4 (team adoption), find your most curious team member and give them a week to experiment with AI tools in their role.
Don’t let imperfect scores stop you from starting. A score of 8 means you can start—you just need to address the weak signal in parallel with your first implementation.
4-7 Points: Foundation Building Needed
You have the potential, but the foundation isn’t there yet. Trying to deploy AI now would be like installing a GPS in a car that doesn’t have an engine.
Focus on the fundamentals: document your top five workflows (Signal 1), start tracking basic process metrics (Signal 5), and get your operational data into accessible formats (Signal 3). This foundation work takes 30-60 days and pays off whether or not you ever deploy AI—because it makes your operation more visible and manageable.
0-3 Points: Not Yet
AI isn’t your priority right now. You have more fundamental operational challenges to address first. That’s not a criticism—it’s a recognition that trying to automate a business that doesn’t have basic workflow documentation or operational data will waste money and create frustration.
Focus on building the operational basics: documented processes, accessible data, and process metrics. Come back to this assessment in 90 days.
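If you want the tally to be repeatable when you re-run the assessment, here’s a minimal sketch that encodes the bands above. The example scores are placeholders; substitute your own.

```python
# Total the five signal scores (0-3 each) and print the readiness band.
# The example scores below are placeholders; use your own.
scores = {
    "repetitive_workflows": 3,
    "recurring_complaints": 2,
    "data_exists": 2,
    "team_adoption": 1,
    "measurable_outcomes": 2,
}
total = sum(scores.values())

if total >= 12:
    band = "Ready to Deploy"
elif total >= 8:
    band = "Ready with Prep Work"
elif total >= 4:
    band = "Foundation Building Needed"
else:
    band = "Not Yet"

print(f"{total}/15: {band}")  # -> 10/15: Ready with Prep Work
```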
Running the Assessment: Your One-Day Schedule
Here’s how to do this in a single day without disrupting your operation.
8:00 AM — Signal 1 Workshop (90 minutes)
Sit down with your ops manager or equivalent. Whiteboard your top 10 most time-consuming workflows. For each one, define the trigger, the steps, the output, and the approximate time per instance. Score Signal 1.
9:30 AM — Signal 2 Team Check (60 minutes)
Walk the floor—or get on a call if you’re remote. Ask three to five team members: “What wastes your time every week? What problems keep coming back?” Don’t defend. Don’t explain. Just listen and write it down. Score Signal 2.
10:30 AM — Break (30 minutes)
11:00 AM — Signal 3 Data Inventory (90 minutes)
List every system, spreadsheet, and data source in your business. For each one, note: what data it holds, how accessible it is (API, export, manual only), and how far back the history goes. Score Signal 3.
12:30 PM — Lunch
1:30 PM — Signal 4 Adoption Survey (60 minutes)
Talk to your team individually or send a quick survey: “Are you using any AI tools? What for? How often?” You’ll be surprised by what you find. Score Signal 4.
2:30 PM — Signal 5 Baseline Check (90 minutes)
For the top three workflows from Signal 1, establish rough metrics. How long does each take? How many per week? What’s the error/rework rate? What does it cost in loaded labor? Score Signal 5.
4:00 PM — Scoring and Action Plan (60 minutes)
Add up your scores. Read the interpretation above. Write down your three specific next steps.
5:00 PM — Done.
You just completed in one day what a consulting firm would take six weeks to deliver. And your version is more honest, because you did it yourself with real knowledge of your operation instead of through a series of polished stakeholder interviews designed to tell the consultants what they want to hear.
The Connection to Your Invisible Factory
If you’ve read about the discovery questions in The Operator’s AI Playbook, you’ll notice these five signals map directly to them:
- Signal 1 (Repetitive workflows) connects to Discovery Question 1: What decisions do you make repeatedly?
- Signal 2 (Recurring complaints) connects to Discovery Question 2: Where does information get stuck?
- Signal 3 (Data exists) is the foundation for Discovery Question 3: What work happens after hours?—because after-hours work usually exists to process data that didn’t flow properly during the day.
- Signal 4 (Team adoption) connects to Discovery Question 4: Where do your best people spend time on your worst work?—because people who are already using AI have usually identified their own worst work.
- Signal 5 (Measurable outcomes) connects to Discovery Question 5: What tribal knowledge lives in one person’s head?—because measurable processes are the opposite of tribal knowledge.
This isn’t a coincidence. The readiness assessment and the discovery process are two views of the same operational reality. The assessment tells you if you’re ready. The discovery questions tell you where to start.
What This Assessment Won’t Tell You
Let me be honest about the limits.
This assessment tells you whether the raw ingredients exist. It doesn’t tell you which specific AI platform to buy. It doesn’t tell you how to integrate with your particular ERP system. It doesn’t tell you how to manage the change process with a specific team dynamic.
Those are implementation questions, and they matter—but they’re the wrong questions to ask first. The right first question is: “Am I ready?” You now have the answer.
If your score says you’re ready, the next step is to map AI to your specific operation. The Operator’s AI Playbook covers the complete framework—from the discovery questions through the 11 functional primitives, scoring methodology, implementation phases, and the people strategy that makes it stick.
Don’t let an $85,000 assessment be the reason you wait another quarter. You have everything you need to answer the readiness question yourself. Today. In one day. With honest answers and a willingness to score yourself without flattery.
The only readiness assessment that matters is the one that ends with action.
