How to Run Your Own AI Readiness Assessment in One Day
Most AI readiness assessments are consultant traps. Here is a one-day version you run yourself — five signals, real questions, honest scoring.
A consulting firm pitched one of my portfolio companies a “comprehensive AI readiness assessment.” Six weeks. $85,000. They’d deliver a 120-page report with a maturity score on a five-point scale.
I asked what would differ between a score of 2.3 versus 3.1. The partner paused and said, “Well, the phasing would be different.” Eighty-five thousand dollars for different phasing.
You don’t need a six-week assessment. You need one honest day, five signals, and a willingness to score yourself without flattery.
Why Most Readiness Assessments Are Traps
Traditional AI readiness assessments are designed to justify the engagement and create enough complexity that you need more consulting to act on the results.
They evaluate “data maturity” on a spectrum. They map your “technology ecosystem” across fourteen dimensions. They assess “organizational change capacity” using frameworks from academic papers nobody has read since 2019.
At the end, you get a spider chart showing you’re strong in “leadership vision” but weak in “data governance.” The recommended next step is always another engagement.
The problem isn’t that they’re wrong. They’re usually directionally correct. The problem is they’re wildly overcomplicated for what you actually need to know:
- Do I have the raw ingredients to make AI work?
- If yes, where do I start?
- If no, what are the one or two things to fix first?
You can answer that in a day.
The Five Signals of AI Readiness
Signal 1: You Have Repetitive Workflows with Clear Inputs and Outputs
AI needs patterns. If your business runs on purely creative, one-off decisions with no repeatable structure, AI won’t help much yet.
But almost no business is actually like that. Even the most “creative” operations have repetitive scaffolding underneath — intake, billing, document management, scheduling.
Questions to ask:
- What tasks happen 10+ times per week following roughly the same steps?
- Which have a clear trigger and clear output?
- How much variation exists? 80% same with 20% variation, or genuinely different every time?
Scoring:
- Strong (3): 5+ repetitive workflows with clear inputs/outputs and less than 30% variation
- Moderate (2): 2-4 workflows, but inputs/outputs aren’t always clean
- Weak (1): Hard to identify repetitive workflows, or high variation in each instance
- Missing (0): Genuinely ad hoc operation with no repeatable patterns
Signal 2: Your Team Complains About the Same Problems Weekly
This is the signal most assessments miss, and one of the most reliable indicators of fast ROI.
Recurring complaints map your invisible factory. “I spent all morning chasing that information.” “The schedule changed and nobody told the warehouse.” Those are broken, manual workflows.
Questions to ask:
- What are the top three complaints you hear every week?
- Which involve information not being where it needs to be?
- How many hours per week go to workarounds for known problems?
Scoring:
- Strong (3): 3+ recurring complaints involving information flow or coordination failures. 10+ hours/week lost to workarounds.
- Moderate (2): 1-2 recurring issues, but vague (“things are chaotic”) rather than specific
- Weak (1): Complaints are about people, not processes
- Missing (0): You don’t hear complaints because you’re too far from the operation
Signal 3: You Have Data Somewhere, Even If It’s Messy
You don’t need a data lake or warehouse. You need data. Somewhere. Spreadsheets count. Your ERP counts. Even paper logs count if someone can photograph them.
The myth that you need “clean data” before using AI has delayed more implementations than any other misconception. Modern AI works well with messy, incomplete, inconsistently formatted data. Not ideal — but not a showstopper.
What is a showstopper: no data at all. But I’ve never encountered a business over $1M in revenue where that was actually true.
Questions to ask:
- Where does data live? List every system, spreadsheet, database, and paper log.
- Can we pull historical data for the last 90 days on our most important workflows?
- How many systems can export in structured format (CSV, API, database query)?
Scoring:
- Strong (3): 3+ systems with structured data, 90+ days history, and export capability
- Moderate (2): Data exists but is fragmented — some in ERP, some in spreadsheets, some in email
- Weak (1): Most operational data lives in people’s heads or hard-to-access formats
- Missing (0): No recorded operational data
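If any of your systems can export CSV, the 90-day question above takes only a few lines to answer. Here is a minimal sketch; the file name and date column in the usage comment are hypothetical placeholders for whatever your own export looks like:

```python
import csv
from datetime import datetime

def history_days(path, date_column, fmt="%Y-%m-%d"):
    """Return how many days of history a CSV export covers."""
    with open(path, newline="") as f:
        dates = [datetime.strptime(row[date_column], fmt)
                 for row in csv.DictReader(f)
                 if row.get(date_column)]  # skip rows with a blank date
    if not dates:
        return 0
    return (max(dates) - min(dates)).days

# e.g. history_days("orders_export.csv", "order_date") >= 90
```

If that number comes back at 90 or more for your most important workflow, you can check the history box for Signal 3 without buying any tooling.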
Signal 4: Someone on Your Team Is Already Using AI
If someone is already using ChatGPT to draft emails or experimenting with AI tools, you have something money can’t buy: an internal champion past the skepticism barrier.
That person is your accelerator. They’ve proven to themselves it works and figured out some rough edges. They just need permission and support.
Questions to ask:
- Is anyone using AI tools, even unofficially?
- What for? How often?
- Have they shared what they’ve learned?
Scoring:
- Strong (3): Multiple people actively using AI for work. Organic momentum.
- Moderate (2): One or two experimenting in isolation
- Weak (1): Someone tried ChatGPT personally but hasn’t applied it to work
- Missing (0): Nobody has touched any AI tool
Signal 5: You Have a Clear Before-and-After You Can Measure
Without this, you’ll never know if AI is working and never build the case for expanding it.
You need: “Today this process takes X hours with Y errors. After AI, we expect A hours and B errors.” Doesn’t need decimal precision. “45 minutes per quote, 30 quotes per week” is specific enough. “Our quoting process is slow” is not.
Questions to ask:
- For the workflows in Signal 1, how long does each take today?
- What’s the error or rework rate?
- How would we know if it got 30% better?
Scoring:
- Strong (3): Can quantify time, cost, and quality for top 3 workflows. Baseline data exists or can be established within a week.
- Moderate (2): General sense of time and cost but no formal tracking
- Weak (1): Know things are “slow” or “expensive” but can’t put numbers to it
- Missing (0): No visibility into process performance
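The back-of-envelope math behind Signal 5 is simple enough to script. A minimal sketch using the quoting example above (45 minutes per quote, 30 quotes per week); the loaded labor rate and error rate are hypothetical assumptions, not figures from the article:

```python
def weekly_baseline(minutes_per_instance, instances_per_week,
                    loaded_rate_per_hour, error_rate=0.0):
    """Rough weekly cost of one workflow: time, dollars, reworks."""
    hours = minutes_per_instance * instances_per_week / 60
    cost = hours * loaded_rate_per_hour
    reworks = round(instances_per_week * error_rate, 2)
    return {"hours": hours, "cost": cost, "reworks": reworks}

b = weekly_baseline(45, 30, loaded_rate_per_hour=60, error_rate=0.1)
print(f"{b['hours']} h/week, ${b['cost']:.0f}/week, {b['reworks']} reworks")
# 22.5 h/week, $1350/week, 3.0 reworks
```

That level of precision is all Signal 5 demands: enough to know, later, whether a 30% improvement actually happened.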
How to Score Your Assessment
Add up points across all five signals.
12-15 Points: Ready to Deploy
You have the raw ingredients. Stop assessing and start building. Pick the highest-value workflow from Signal 1, confirm it against Signal 2 complaints, and deploy your first AI agent with human-in-the-loop oversight. Be operational within 30 days.
8-11 Points: Ready with Prep Work
Most of what you need is there, with fixable gaps.
- Low on Signal 3 (data)? Spend two weeks organizing data sources.
- Low on Signal 5 (measurement)? Spend a week baselining top workflows.
- Low on Signal 4 (adoption)? Give your most curious team member a week to experiment.
A score of 8 means start now — address the weak signal in parallel.
4-7 Points: Foundation Building Needed
The potential exists but the foundation isn’t there. Focus on fundamentals:
- Document your top five workflows
- Start tracking basic process metrics
- Get operational data into accessible formats
This takes 30-60 days and pays off whether or not you deploy AI.
0-3 Points: Not Yet
You have more fundamental operational challenges to address first. Focus on documented processes, accessible data, and process metrics. Reassess in 90 days.
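The whole scoring scheme fits in a few lines if you want to keep it in a spreadsheet alternative or a notebook. A minimal sketch of the bands described above; the example scores are hypothetical:

```python
# Interpretation bands from the assessment, highest floor first.
BANDS = [
    (12, "Ready to Deploy"),
    (8, "Ready with Prep Work"),
    (4, "Foundation Building Needed"),
    (0, "Not Yet"),
]

def readiness(scores):
    """scores: dict of signal name -> 0..3. Returns (total, band)."""
    if any(s not in (0, 1, 2, 3) for s in scores.values()):
        raise ValueError("each signal scores 0-3")
    total = sum(scores.values())
    band = next(label for floor, label in BANDS if total >= floor)
    return total, band

example = {
    "repetitive_workflows": 3,
    "recurring_complaints": 2,
    "data_exists": 2,
    "team_adoption": 1,
    "measurable_outcomes": 2,
}
print(readiness(example))  # (10, 'Ready with Prep Work')
```

Nothing about the arithmetic is sophisticated; the point is that the bands force a decision instead of a spider chart.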
Your One-Day Schedule
8:00 AM — Signal 1 Workshop (90 min) Whiteboard your 10 most time-consuming workflows with your ops manager. Define trigger, steps, output, and time per instance. Score Signal 1.
9:30 AM — Signal 2 Team Check (60 min) Walk the floor or get on a call. Ask 3-5 team members: “What wastes your time every week?” Don’t defend. Just listen and write it down. Score Signal 2.
10:30 AM — Break (30 min)
11:00 AM — Signal 3 Data Inventory (90 min) List every system, spreadsheet, and data source. Note what data it holds, accessibility (API, export, manual), and history depth. Score Signal 3.
12:30 PM — Lunch
1:30 PM — Signal 4 Adoption Survey (60 min) Talk to your team or send a quick survey: “Are you using any AI tools? What for? How often?” Score Signal 4.
2:30 PM — Signal 5 Baseline Check (90 min) For the top three workflows from Signal 1, establish rough metrics: time per instance, volume per week, error/rework rate, loaded labor cost. Score Signal 5.
4:00 PM — Scoring and Action Plan (60 min) Add scores. Read the interpretation. Write three specific next steps.
5:00 PM — Done.
You just completed in one day what a consulting firm would take six weeks to deliver.
Connection to the AI Playbook
If you’ve read the discovery questions in The Operator’s AI Playbook, these five signals map directly:
- Signal 1 (Repetitive workflows) -> Discovery Question 1: What decisions do you make repeatedly?
- Signal 2 (Recurring complaints) -> Discovery Question 2: Where does information get stuck?
- Signal 3 (Data exists) -> Discovery Question 3: What work happens after hours?
- Signal 4 (Team adoption) -> Discovery Question 4: Where do your best people spend time on your worst work?
- Signal 5 (Measurable outcomes) -> Discovery Question 5: What tribal knowledge lives in one person’s head?
The assessment tells you if you’re ready. The discovery questions tell you where to start.
What This Assessment Won’t Tell You
This tells you whether the raw ingredients exist. It doesn’t tell you which AI platform to buy, how to integrate with your ERP, or how to manage change with your specific team.
Those are implementation questions — important, but the wrong questions to ask first. The right first question is: “Am I ready?”
If your score says yes, the next step is mapping AI to your specific operation. The Operator’s AI Playbook covers the complete framework — discovery questions, functional primitives, scoring methodology, implementation phases, and people strategy.
Don’t let an $85,000 assessment be the reason you wait another quarter. The only readiness assessment that matters is the one that ends with action.
