
AI Readiness Assessment: The 12-Point Checklist Before You Spend $50k on AI

Before you sign a six-figure AI contract, run your team through this 12-point readiness check. The data, the team, the use-case, the budget, the success metric — the things that decide whether AI works for you or burns $50k.


Najeebullah

Founder, Paisol Technology

May 11, 2026 · 12 min read

Most failed AI projects fail before they start. The team wasn't ready, the data wasn't ready, the use-case wasn't ready — and nobody noticed until six figures had been spent. This 12-point checklist is the same one we walk every AI consulting client through on the first call. Score < 7 and the right move is almost always wait, not build.

At Paisol Technology we've shipped 500+ AI projects — and we've declined roughly 1 in 5 inbound engagements because the team wasn't ready yet. The pattern is always the same: the founder is excited, the boardroom is hungry for an AI story, and the signals on the ground say "not yet." The checklist below is how we tell the difference quickly — and why Northwood Insurance saved $320k by failing this assessment honestly before signing a $400k contract.

How to use this checklist

Score each of the 12 items below as Yes (1), Partial (0.5), or No (0). Add up your score:

  • 10–12: Ready. Move ahead with a production build. Pick the right partner.
  • 7–9.5: Mostly ready. Start with a bounded pilot, not a full build. Re-score in 90 days.
  • 4–6.5: Not yet. Invest in fundamentals first — usually data, use-case clarity, or executive alignment. A 2-day strategy workshop ($5k–$15k) is the typical right next step.
  • 0–3.5: Don't build. The fundamentals aren't there. Building now wastes money and burns political capital. Fix the basics first.
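The scoring logic above is simple enough to sketch in a few lines of Python. This is an illustration, not a tool we ship; the band labels are paraphrased from the list above:

```python
# Scoring sketch for the 12-point checklist: Yes = 1, Partial = 0.5, No = 0.
# Band thresholds match the article; band labels are paraphrased.

SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def readiness_band(answers):
    """answers: 12 strings, each 'yes', 'partial', or 'no'."""
    assert len(answers) == 12, "score all 12 items"
    total = sum(SCORES[a.lower()] for a in answers)
    if total >= 10:
        band = "Ready: move ahead with a production build"
    elif total >= 7:
        band = "Mostly ready: start with a bounded pilot"
    elif total >= 4:
        band = "Not yet: invest in fundamentals first"
    else:
        band = "Don't build: fix the basics first"
    return total, band

# Example: strong use-case and team, weak data (items 4-6).
answers = ["yes"] * 3 + ["no", "no", "partial"] + ["yes"] * 4 + ["partial", "yes"]
print(readiness_band(answers))  # (9.0, 'Mostly ready: start with a bounded pilot')
```

Note that a single weak section (here, data) is enough to drop a team from "Ready" into pilot territory — which is the point of scoring item by item instead of going on gut feel.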

The 12-point checklist

Section A — The use-case (items 1–3)

1. You can describe the AI project in one sentence with specific verbs and numbers

Pass: "Auto-resolve order-status questions and process refunds under $200, with a target of 50% containment rate by week 12." Fail: "Use AI to improve customer experience."

If you can't write the one-sentence brief yourself, you're not ready. This is the single biggest predictor of project success in our data — and it's the cheapest one to fix (one whiteboard session usually solves it).

2. You have a baseline number to beat

AI projects without baseline metrics turn into "we shipped something, but did it move anything?" debates 6 months in. Before you build, document:

  • Current process metric: tickets/day handled by humans, hours per report, errors per 1,000 transactions
  • Current cost: headcount × salary × time
  • Target after AI: a specific number with a specific deadline

If you can't cite the current number from a system of record (not vibes), invest 1–2 weeks measuring it before the AI conversation continues.
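The "headcount × salary × time" arithmetic is worth making concrete. A minimal sketch — every figure below is a made-up assumption for illustration, not a benchmark:

```python
# Illustrative baseline-cost arithmetic for item 2 (all figures are
# assumptions, not benchmarks): headcount x salary x time share.

def manual_process_cost(headcount, annual_salary, time_fraction):
    """Annual cost of the people-hours spent on the current process.

    time_fraction: share of each person's time spent on it (0.0-1.0).
    """
    return headcount * annual_salary * time_fraction

# e.g. 4 support agents at $55k/year, each spending 60% of their
# time on order-status tickets:
baseline = manual_process_cost(4, 55_000, 0.60)
print(f"${baseline:,.0f}/year")  # $132,000/year
```

That one number is the denominator of every ROI claim the project will ever make, which is why it has to come from a system of record.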

3. The use-case has a closed feedback loop

AI systems improve through feedback. A "closed loop" means: the system takes an action, the action's outcome is observable, and the outcome can flow back into training or evaluation. Customer support, sales qualification, document extraction, fraud scoring — closed loops. "Improving creativity," "driving innovation," "unlocking insights" — open loops, almost impossible to measure or improve.

If your use-case is an open loop, the build can still work — but it'll be harder to prove ROI and harder to maintain. Score Partial unless the loop is obviously closed.

Section B — The data (items 4–6)

4. You have at least 3 months of clean usable data

AI agents need historical examples: past tickets to learn ticket classification, past invoices to learn extraction, past conversations to learn tone. "Three months clean" means:

  • Stored in a system of record you can query (not in 4 people's Gmail)
  • Roughly consistent schema across the period
  • Labelled or annotatable — you can describe what "right" looks like

If your data is in 6 systems that don't talk to each other, your AI project is actually a data project. Do that first.

5. You have legal clarity on using the data

GDPR, HIPAA, PCI, SOC 2, employment-data rules — every regulated industry has constraints on what data can be used to train or prompt AI. Before the build:

  • Is the data in your own infrastructure, not a third-party vendor's?
  • Do customer contracts permit AI usage?
  • If you process EU personal data, is the model deployment EU-region compliant?
  • If healthcare: is the deployment HIPAA-eligible (BAA in place)?

Most early-stage teams haven't thought about this. It's the single biggest late-stage budget killer when ignored.

6. The data quality is documented and acceptable

Bad data in, bad agent out. Spot-check 100 records of your training data:

  • What % is duplicate, malformed, or wrong?
  • What % is missing critical fields?
  • What does "right" look like — and can a human agree on it?

If > 30% of records are noise, you have a data-cleaning project before you have an AI project. Budget for it explicitly. See our LLM fine-tuning vs RAG guide for how data quality changes the technical approach.
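The 100-record spot-check can be automated in a few lines. A hedged sketch — the field names (`"id"`, `"status"`) are placeholders for your own schema, and "duplicate" here means an exact record-level match:

```python
# Sketch of the 100-record spot-check for item 6. Field names
# ("id", "status") are placeholders for your own schema.

def spot_check(records, required_fields, sample_size=100):
    """Return duplicate and missing-field percentages for a sample."""
    sample = records[:sample_size]
    seen, dupes, missing = set(), 0, 0
    for rec in sample:
        key = tuple(sorted(rec.items()))
        if key in seen:          # exact duplicate of an earlier record
            dupes += 1
        seen.add(key)
        if any(not rec.get(f) for f in required_fields):
            missing += 1         # a critical field is empty or absent
    n = len(sample)
    return {"duplicate_pct": 100 * dupes / n,
            "missing_pct": 100 * missing / n}

# Synthetic example: 10% exact duplicates, 10% missing a status.
records = [{"id": i % 90, "status": "paid" if i % 10 else ""}
           for i in range(100)]
print(spot_check(records, ["status"]))  # {'duplicate_pct': 10.0, 'missing_pct': 10.0}
```

The third question — "can a human agree on what 'right' looks like?" — can't be automated, which is exactly why it's the one to spend your SME's time on.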

Section C — The team (items 7–9)

7. You have an internal owner who will live with the system

Every successful AI deployment we've shipped has one. The owner is a real human (not a committee) who is accountable for the agent's performance after launch. They'll watch the dashboards, triage misclassifications, review eval failures, and decide when to retrain.

No owner = abandoned system 6 months in. We've seen it dozens of times.

8. Your executive sponsor is on board for 12 months — not 12 weeks

AI projects take 8–14 weeks to ship, then 6–12 weeks to tune to production accuracy. The executive who approves the budget needs to still care 6 months in. If your sponsor's attention will move to the next quarterly OKR, the agent will quietly stop working — and nobody will notice.

9. You have someone who can review the agent's output

Especially for the first 90 days. A subject-matter expert who can label outputs as right/wrong, suggest tone tweaks, and flag edge cases. Without them, the eval set is guesswork.

This is one of the most under-budgeted line items. Expect to invest 5–10 hours/week of SME time during the build phase.

Section D — The budget & timeline (items 10–12)

10. You have $15k+ for a real build (or honesty about a pilot)

Production AI builds at decent quality start at ~$15k for a starter agent and scale to $150k+ for multi-agent systems. See our cost guide for tier-by-tier ranges. If your budget is < $10k, the honest move is a 2-week pilot — not a production build.

11. You have a real deadline, not an aspirational one

Real deadline: a board demo, a customer contract clause, a regulatory deadline, a conference launch. Aspirational deadline: "sometime this year would be nice."

Projects with real deadlines ship at 2× the rate of projects with aspirational ones — because trade-offs get made. If you don't have a real deadline, manufacture one before you kick off.

12. You have appetite for measured iteration

AI agents at v1 won't hit your target metric. They'll hit 60–70% of it. The path to 90%+ is 6–12 weeks of measured iteration — eval failures triaged, prompts tuned, tools added, edge cases handled. If your team expects v1 to be perfect, expect disappointment. If your team expects iterative improvement on a real curve, expect production success.

What to do at each score band

Score 10–12: Ready — pick the right partner

You're in the top quartile of AI buyers. Most projects you start at this level ship to production. Use our 7-step hiring framework to pick the right team, lock in fixed price, and start.

Score 7–9.5: Mostly ready — start with a bounded pilot

Don't commit to a full production build yet. Start with a 4–6-week paid pilot that de-risks the highest-uncertainty item on your scorecard. Common pilots:

  • Build an eval harness against the actual data — proves data quality (item 6)
  • Ship a working prototype on 20% of the use-case — proves the closed loop (item 3)
  • Run a 2-week shadow trial of an off-the-shelf vendor — proves you need custom at all

Re-score in 90 days. If you're now > 10, commit to the full build.

Score 4–6.5: Not yet — invest in fundamentals

Most teams at this score think they're close. They're not. The right next step is a 2-day AI strategy workshop ($5k–$15k) — a written audit of where you are, what's missing, and the 90-day plan to close the gaps. Common findings:

  • You need 3 months of data consolidation before the agent build
  • Your use-case is actually two use-cases — pick one
  • You're missing the SME — staffing decision first, build decision later
  • The right answer is buy, not build (see Northwood)

See Northwood Insurance — they scored 4 on this checklist before our workshop. We told them not to build. They saved $320k. They came back the next year scoring 10 and shipped a much smaller, targeted system.

Score 0–3.5: Don't build

The fundamentals aren't there. Building now means a $30k–$200k learning experience that produces no usable system. The political fallout makes the next AI project at your company harder, not easier.

Spend the AI budget on the fundamentals instead — data consolidation, hiring a senior engineering manager, defining the use-case in writing. Re-run this assessment in 6 months.

The 4 most common reasons teams overscore themselves

1. They confuse "data exists" with "data is usable"

Yes, the data is somewhere. No, you can't query it. No, it's not labeled. Yes, there are 4 incompatible schemas across 3 vendor systems. Item 4 should be scored honestly, not aspirationally.

2. They confuse executive enthusiasm with executive commitment

The CEO is excited about AI today. Will they still be in 9 months? Item 8 is about durability, not enthusiasm. Score it on the executive's track record, not their current speech.

3. They forget the SME staffing problem

Item 9 fails more than any other item. Founders think they'll allocate the SME time from their existing team. The SME's manager disagrees in week 3. Project stalls. Build the SME staffing into the project plan from Day 1.

4. They underbudget by 40%

The build is half the cost. Operating the system in production — observability, eval maintenance, retraining, edge-case handling — is the other half. Founders consistently budget the first half and forget the second. Item 10 should include both.

What a real AI readiness assessment looks like

The checklist above is the 10-minute version. Our 2-day AI strategy workshop is the 16-hour version: we go through the same 12 items, but with whiteboard sessions, data spot-checks, SME interviews, and a written report. Common deliverables:

  • A scored readiness assessment (one page)
  • A prioritized list of 3 high-ROI AI use-cases, with dollar estimates
  • The 90-day "path to ready" if you're not yet
  • A build-vs-buy recommendation for each use-case
  • A fixed-price quote for the recommended build (if any)

Cost: $5k for a 1-day virtual workshop, $12k for a 2-day on-site. Most workshops save the client at least 5× the workshop fee — see Northwood Insurance ($320k saved).

The bottom line

The hardest part of an AI project is knowing whether to start it. Most failed AI projects weren't bad builds — they were good builds done at the wrong time. The 12-point checklist above won't guarantee success, but it will catch ~80% of the projects that shouldn't start yet.

If you're scoring yourself somewhere in the middle, that's where consulting actually earns its fee — outside-in, honest pattern-matching against the 500+ other teams we've seen at the same checkpoint. See when to hire a consultant vs a developer for the sequencing.

Want a walkthrough?

Book a free 30-minute strategy call and we'll run the 12-point assessment against your specific project — live, on screen, no slides. You'll leave with a clear score and a written plan for the next 30 days. Or take our free AI Opportunity Audit for a structured 1-page report. If you're ready to build, here's our AI agent development service.

Ready to ship?

Book a free 30-minute strategy call.

No pitch. Walk away with a clear scope and fixed-price quote — even if you don't hire us.

Book My Strategy Call →