A week-by-week action plan for implementing your first AI tool without disrupting your practice. Check off tasks as you go — your progress saves automatically.
90-Day Plan
12 Weekly Milestones
48 Action Items
3 Phases
🎯
How to Use This Checklist
Work through each week sequentially. Don't skip Phase 1 — it's the foundation everything else builds on. The most common reason AI implementations fail is insufficient baseline measurement. Before you sign a contract with any vendor, you need your "before" numbers. Items marked CRITICAL are non-negotiable. Items marked HIGH significantly impact ROI. Everything else is best practice. Click any week to expand or collapse. Your checkmarks save in your browser.
1
Phase 1: Foundation
Establish baselines, select vendor, execute BAA, and train your team
Days 1–30
Week 1: Baseline Measurement Sprint (Focus: Data Collection)
📊 Pull These Reports From Your PMS
No-show rate report (last 6 months)
No-shows + late cancellations ÷ total scheduled appointments. This is your most valuable baseline metric.
CRITICAL
Restorative procedures per patient exam
Total restorative procedures ÷ total exams (last 90 days). This measures your diagnostic yield before AI.
CRITICAL
Case acceptance rate for restorative treatment
Treatment plans presented vs. scheduled. Ask your PMS support team if you can't find this report.
CRITICAL
Insurance claim denial rate
Denied claims ÷ total claims submitted (last 90 days). Break down by payer and procedure code if possible.
HIGH
Average days in accounts receivable
Weighted average by balance. From your AR aging report.
HIGH
Overdue recare patient count
Patients 90+ days past their recall due date. This is your reactivation opportunity inventory.
HIGH
Open treatment plan dollar value
Total value of diagnosed but unscheduled treatment. This is warm revenue waiting to be recovered.
HIGH
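The ratio formulas above are simple division, but it's easy to mix up numerators when pulling reports. A minimal sketch of the three most important calculations — every input number below is a hypothetical placeholder; substitute the counts from your own PMS reports:

```python
# Baseline metric formulas from the checklist above.
# All counts are hypothetical placeholders - use your own PMS numbers.

no_shows = 42                  # last 6 months
late_cancellations = 18
total_scheduled = 800

# No-show rate = (no-shows + late cancellations) / total scheduled
no_show_rate = (no_shows + late_cancellations) / total_scheduled

restorative_procedures = 310   # last 90 days
total_exams = 520

# Diagnostic yield = restorative procedures per exam
diagnostic_yield = restorative_procedures / total_exams

denied_claims = 33             # last 90 days
total_claims = 600

# Denial rate = denied claims / total claims submitted
denial_rate = denied_claims / total_claims

print(f"No-show rate:     {no_show_rate:.1%}")
print(f"Diagnostic yield: {diagnostic_yield:.2f} procedures/exam")
print(f"Denial rate:      {denial_rate:.1%}")
```

Run the same three calculations again at the Week 8 and Week 12 reviews so the deltas are computed identically to the baseline.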
💾 Document & Save
Save all baseline reports to a shared folder (labeled "Pre-AI Baseline – [Date]")
You'll need these in 90 days. Don't lose them.
CRITICAL
Enter your numbers into the ROI Calculator (included in Starter Kit)
This gives you your personalized ROI projection and helps prioritize which tool to pilot first.
HIGH
Week 2: Vendor Evaluation (Focus: Due Diligence)
📋 Before Each Demo
Shortlist 2–3 vendors for your first tool category
Use the Comparison Matrix from your Starter Kit. Don't evaluate more than 3 simultaneously.
HIGH
Request demos in your actual PMS environment (not vendor sandbox)
Email: "We'd like to see this tool running live in [your PMS name]. Can we schedule a demo with our system connected?"
CRITICAL
Schedule demos with your clinical lead and office manager present
The people who will use the tool daily must evaluate it. Their veto matters.
HIGH
❓ Ask Every Vendor These 5 Questions
Q1: What's your training dataset size and demographic composition?
Look for millions of images and diversity across equipment brands and patient demographics.
HIGH
Q2: What peer-reviewed studies support your accuracy claims?
Ask for the actual papers — not a marketing summary. Note who funded the research.
HIGH
Q3: What is your false positive rate? (Specificity, not just sensitivity)
A tool that flags everything is useless. Demand specificity data.
HIGH
Q4: Can you provide reference contacts at practices running our same PMS?
Call them. Ask specifically what didn't work in the first 60 days and how it was resolved.
HIGH
Q5: What are the full BAA terms and data handling policies?
Specifically ask: do you use our patient images to train future models? What is your data retention policy?
Execute BAA with selected vendor before any patient data is shared
Non-negotiable for HIPAA compliance. If the vendor delays or resists, walk away.
CRITICAL
Review and sign pilot/trial agreement (prefer month-to-month initially)
Do not sign an annual contract for a tool you haven't piloted with real cases.
CRITICAL
⚙️ Technical Setup
Schedule vendor-assisted installation and integration testing
Block 2–3 hours for this. Test the integration with a real patient scenario before going live.
CRITICAL
Identify your internal champion
One person is accountable for this implementation. They're the point of contact with the vendor and the internal advocate. Choose carefully.
HIGH
Test with 5–10 historical cases before live patient use
For diagnostic AI: upload historical X-rays and review what the AI flags. Compare to what was documented at the time.
HIGH
Week 4: Team Training & Go-Live (Focus: Adoption)
👥 Staff Training
All-staff training session (vendor-led, 60–90 min)
Everyone who touches the workflow needs to be in this room. No exceptions.
CRITICAL
Address the "replacement" question directly with your team
Be explicit: AI handles routine tasks so staff can focus on judgment-intensive work. Acknowledge the concern; don't dismiss it.
HIGH
Create a quick-reference cheat sheet for each role
One page. What does the tool do for me? What do I do differently? What do I do when something looks wrong?
NORMAL
🚀 Go-Live
Set a 30-day no-judgment rule: run the pilot before evaluating
Commit to the full pilot before any team member declares "this isn't working." New workflows feel clunky for 2–3 weeks. That's normal.
HIGH
Schedule daily 5-minute check-ins with champion for first 2 weeks
What questions came up? What friction appeared? Log everything for the Month 1 review.
NORMAL
🏁
Phase 1 Complete — Week 4 Milestone
You should now have: baseline metrics documented, BAA signed, tool installed and integrated, all staff trained, and your first real patient data running through the system. If any of these are missing, do not proceed to Phase 2.
2
Phase 2: Optimization
Analyze early results, fix friction points, and expand patient-facing use
Days 31–60
Week 5: First Data Review (Focus: Early Signal)
📈 Analyze Early Data
Pull same baseline metrics from Week 1 for comparison
Don't draw conclusions yet — 1 week isn't statistically meaningful. This is calibration, not evaluation.
HIGH
Contact vendor support for any unexpected behavior
Log every anomaly. If the AI is flagging too aggressively or missing things your team catches, that's data for the vendor.
HIGH
Survey your team: what's working, what's friction?
An anonymous 5-question form is fine. You need honest feedback, not diplomatically filtered answers.
NORMAL
Week 6: Workflow Refinement (Focus: Remove Friction)
🔧 Fix the Friction Points
Address every friction point identified in the team survey
Most friction comes from a missing protocol (what do we do when the AI disagrees with the clinician?), not from the tool itself. Write the protocol.
CRITICAL
Customize message templates if using communication AI
Generic "Hi [Name], your appointment is coming up" underperforms. Rewrite to sound like your practice. This takes 2 hours; it's worth it.
HIGH
Review and tune AI sensitivity settings if applicable
Some diagnostic AI tools let you adjust flagging sensitivity. Too many false positives → tune down. Too many misses → tune up. Work with your vendor.
For diagnostic AI: begin using AI overlays in patient treatment conversations
Start with cases where you were already planning to recommend restorative treatment. Track the acceptance rate on AI-supported conversations vs. your prior rate.
HIGH
Coach your team on patient-facing language for AI
Script: "Our AI assistant reviews every X-ray and flags areas for closer attention." DO NOT say "the AI diagnosed you" or "the AI says you have a cavity."
HIGH
For communication AI: launch first reactivation campaign (incomplete treatment list)
Target patients with open treatment plan items first — they're your warmest leads. Expect 15–25% conversion.
HIGH
Week 8: 30-Day Pilot Review (Focus: Go/No-Go Decision)
📊 Formal Review Meeting
Pull all metrics and compare to Week 1 baseline
Every metric from your baseline checklist. Document the delta — even if it's negative or flat (that's data too).
CRITICAL
Hold team meeting: present results, gather feedback
Transparency builds trust. Share the numbers. If it's working, celebrate it. If not, name what you're going to do differently.
HIGH
Decision: Continue / Extend pilot / Change tools
Be honest. A flat first month is usually a workflow issue, not a tool issue. An actively worse month is a tool or integration issue — escalate with the vendor immediately.
CRITICAL
If continuing: sign annual contract and plan Tool #2
Lock in annual pricing (typically a 15–20% discount). Identify your second-highest-ROI opportunity from the comparison matrix.
NORMAL
📊
Phase 2 Complete — Day 60 Milestone
By now you should have: 30-day baseline comparison data, all friction points addressed, AI being used in patient conversations, and a clear go/no-go decision made. If metrics are improving, you're on track. Even 50% of projected ROI at this point is a win — the tool compounds over time.
3
Phase 3: Expansion
Launch advanced features, add Tool #2, systematize everything, and document your 90-day ROI
Days 61–90
Week 9: Plan Tool #2 Rollout (Focus: Stack Building)
🔭 Expansion Planning
Based on 30-day data: identify your second-highest revenue leak
If diagnostic AI is working, add communication AI. If communication AI is working, add billing AI. Follow the ROI math, not vendor recommendations.
HIGH
Begin vendor evaluation for Tool #2 (repeat Week 2 process)
Your team now has real experience with AI implementations. They'll be better evaluators the second time.
NORMAL
Week 10: Launch Advanced Features (Focus: Deeper ROI)
🚀 Unlock What You Haven't Used Yet
Communication AI: activate predictive no-show scoring
Review your daily high-risk flagged appointments. Within 30 days you'll develop intuition for your practice's specific patterns.
HIGH
Communication AI: launch segmented overdue recare campaign
Segment: 6–12 months overdue (warm), 12–24 months (cooling), 24+ months (cold). Each gets different messaging.
HIGH
Billing AI: run first denial pattern analysis report
Look for patterns by payer, by provider, by procedure code. Your highest-denial payer gets a workflow audit.
HIGH
Diagnostic AI: enable automated pre-auth narrative generation (if available)
Set up a review protocol: AI generates first draft, clinician approves and edits before submission. Do not auto-submit without review.
Document AI protocols into your standard operating procedures
Who monitors the communication AI inbox? What triggers human escalation? What's the monthly review cadence? Write it. Staff turnover will happen.
CRITICAL
Create AI onboarding guide for new hires
If you hired someone tomorrow, how would they learn the AI workflow? Document that. It reduces your training burden and ensures consistency.
HIGH
Set monthly metric review date (recurring calendar event)
Pick a day — first Monday of the month, third Friday, whatever works. Put it in everyone's calendar. This is your practice data rhythm.
HIGH
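The recare segmentation above is a simple bucketing rule. A minimal sketch, using only the thresholds stated in the checklist item (the function name and the label for patients under 6 months overdue are illustrative assumptions):

```python
def recare_segment(months_overdue: float) -> str:
    """Bucket an overdue recare patient per the Week 10 segmentation:
    6-12 months = warm, 12-24 = cooling, 24+ = cold.
    The 'not yet campaign-eligible' label below is an assumption, not
    part of the checklist."""
    if months_overdue < 6:
        return "not yet campaign-eligible"
    if months_overdue < 12:
        return "warm"
    if months_overdue < 24:
        return "cooling"
    return "cold"

# Each segment gets different messaging, so route on the label:
for months in (7, 15, 30):
    print(f"{months} months overdue -> {recare_segment(months)}")
```

If your communication AI supports tags or lists, one exported list per segment keeps the messaging separation clean.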
Week 12: 90-Day Business Review (Focus: ROI Proof)
🎯 The 90-Day Report
Pull all metrics from Week 1 baseline for final comparison
Every number from the baseline report. Calculate the delta. Calculate the dollar value of the delta.
CRITICAL
Calculate net ROI: (revenue improvement – tool cost) for 90 days
Be conservative. If you're uncertain whether a restorative case was driven by AI, don't count it. The honest number is what you can defend.
CRITICAL
Present 90-day results to your partners / practice manager
Share the wins. Share what didn't work. This is the data that builds your AI culture — and makes future tool decisions easier to get approved.
HIGH
Celebrate with your team
They did the work. The pilot worked because they adopted it, not because the tool was great. Recognize that explicitly.
NORMAL
Plan your 12-month AI roadmap
What's Tool #3? What advanced features are you not yet using? What does your ideal AI stack look like in a year? Now you have the data to plan it intelligently.
NORMAL
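The net-ROI arithmetic from the report above is (revenue improvement − tool cost) for the 90-day window. A quick sketch with hypothetical placeholder figures — only count revenue you can defensibly attribute to the tool, per the checklist item:

```python
# Conservative 90-day net ROI, per the checklist formula above.
# All dollar figures are hypothetical placeholders.

attributable_revenue_gain = 18_500  # only AI-driven revenue you can defend
monthly_tool_cost = 1_200           # assumed subscription price
tool_cost_90_days = 3 * monthly_tool_cost

# Net ROI = revenue improvement - tool cost (for the same 90 days)
net_roi_dollars = attributable_revenue_gain - tool_cost_90_days
roi_multiple = attributable_revenue_gain / tool_cost_90_days

print(f"Net ROI (90 days): ${net_roi_dollars:,}")
print(f"ROI multiple:      {roi_multiple:.1f}x")
```

Presenting both the dollar figure and the multiple makes the partner conversation easier: the dollar figure shows scale, the multiple shows efficiency.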
🎉
90-Day Implementation Complete
You've done what most practices never do: implemented AI with rigor, measured results honestly, and built the internal systems to sustain it. The practices with the strongest AI adoption in 3 years will be the ones who did exactly what you just did — systematically, with real data, and with their team on board from the start.