Answer These 8 Questions Before You Implement AI
Based on Tiago Forte's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A readiness assessment for AI adoption is the difference between using AI to improve a business and burning $50,000 on tools that make operations worse. The core message: AI projects fail most often when companies start with tools instead of outcomes—skipping the questions that determine what to optimize, which parts of the business can actually be leveraged, and how to measure success.
The first step is defining the business outcome AI should drive. Instead of asking “Which AI tool should we try?”, the assessment asks what the company is optimizing for and what bottleneck is blocking that goal. Examples include scaling without constant founder involvement, reducing work that leaders dislike (like accountability and follow-through), or supporting growth and even exit-related objectives. AI is framed as a “genius coach,” but only if it receives the right context; the assessment treats context as the limiting factor, not the model’s capability.
From there, the assessment forces a reality check on where AI can create leverage. The discussion breaks the business into a customer value chain: market-to-lead (discoverability and permission to contact), lead-to-sale (turning contacts into transactions), sale-to-delivery (handoff and onboarding), delivery-to-success (fulfillment that produces outcomes), and then the loop back to market via success-to-lead (reviews, referrals, and repeat demand). AI can heavily automate or improve the earlier, more digital segments, especially customer acquisition and remote fulfillment, while the "delivery-to-success" portion is harder when it depends on in-person work. Even in technician-heavy industries like plumbing, AI can't replace the physical job, but it can optimize pricing, customer service, and dispatch, and streamline enabling functions such as finance and hiring workflows.
A key prerequisite is process maturity. AI can only improve what’s documented: checklists, flowcharts, and SOPs beat “tribal knowledge.” The assessment also offers a practical workaround—AI can help generate documentation quickly (turning months of whiteboard-and-workshop effort into minutes), but teams still need to validate, teach, and measure what gets produced.
The framework then grades implementation difficulty in three tiers: simple prompting inside an LLM, no-code automation with tools such as Make or Zapier, and custom integrations built on APIs such as OpenAI's or Slack's. It also warns that "agents" are often overhyped; much of the value can come from prompting alone, while agent integrations can take months and remain fragile.
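The three tiers can be sketched as a small triage helper. This is a minimal sketch, not part of the original framework: the two boolean criteria (`needs_external_data`, `needs_multi_step_workflow`) are illustrative assumptions about what typically pushes a project up a tier.

```python
def grade_difficulty(needs_external_data: bool, needs_multi_step_workflow: bool) -> str:
    """Triage a proposed AI improvement into one of the three tiers.

    Tier ordering follows the framework:
    prompting < no-code automation < custom API integration.
    The two criteria are hypothetical, for illustration only.
    """
    if needs_external_data and needs_multi_step_workflow:
        # Pulling live data AND chaining steps usually means code against APIs.
        return "custom API integration (e.g., OpenAI or Slack APIs)"
    if needs_multi_step_workflow:
        # Multi-step but off-the-shelf connectors suffice: no-code automation.
        return "no-code automation (e.g., Make or Zapier)"
    # Single-step work, or external data that can simply be pasted as context,
    # is often covered by prompting alone (the assessment's point about agents).
    return "simple prompting inside an LLM"

print(grade_difficulty(False, False))  # simple prompting inside an LLM
print(grade_difficulty(False, True))   # no-code automation (e.g., Make or Zapier)
```

A triage like this keeps the default at the cheapest tier, which mirrors the framework's warning against jumping straight to agent-style integrations.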
Finally, the assessment insists on ROI logic and measurement. Some activities scale in frequency without diminishing returns (like outbound outreach), while others hit practical limits (like sending newsletters or publishing content). Success metrics must be tied to the specific process being improved, along with updated standards and cultural buy-in—because changing expectations alters roles, incentives, and day-to-day behavior. Examples include shifting newsletter work from a full-time task to a smaller slice of a broader AI-assisted content and optimization role, and using AI-driven prediction to raise fulfillment rates in dry goods/clothing without requiring the same inventory costs.
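The ROI logic above can be made concrete with a toy comparison. The numbers below are hypothetical: outbound outreach is modeled as adding roughly constant value per extra batch, while newsletter sends are modeled with each extra send adding half the value of the previous one (diminishing returns).

```python
# Hypothetical per-unit values for scaling each activity's frequency.
outbound = [100] * 10                               # ~linear: each batch adds similar value
newsletter = [100 * (0.5 ** i) for i in range(10)]  # saturating: each send adds half the last

def worth_scaling(per_unit_values: list[float], unit_cost: float) -> bool:
    """True if the most recent increment's marginal value still exceeds its cost."""
    return per_unit_values[-1] > unit_cost

print(worth_scaling(outbound, 50))    # True: scaling outreach frequency still pays
print(worth_scaling(newsletter, 50))  # False: extra sends no longer cover their cost
```

The decision rule is the same one the assessment implies: compare the marginal value of one more unit of frequency against its cost, rather than assuming speed gains alone justify the initiative.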
The takeaway is straightforward: answer the eight readiness questions first, then implement AI where it can be leveraged, measured, and adopted—otherwise the business risks scaling the wrong work faster.
Cornell Notes
AI adoption succeeds when companies start with a readiness assessment that links business outcomes to measurable, leverageable processes. The framework begins by defining what the business is optimizing for and identifying the bottleneck AI should relieve, then maps AI leverage across the customer value chain (market-to-lead, lead-to-sale, sale-to-delivery, delivery-to-success, and the success-to-market loop). It emphasizes that AI works best on documented functions (checklists, flowcharts, and SOPs), and that AI can help generate missing documentation quickly, though teams must still validate the output and train people on it. Implementation difficulty is graded from prompting to no-code automation to custom API integrations, and success requires ROI thinking plus metrics, updated standards, and cultural acceptance.
- Why does starting with AI tools often backfire, and what replaces that approach?
- How does the customer value chain determine where AI will help most?
- What prerequisite makes a business "ready" for AI, and what if documentation is missing?
- How should businesses decide whether an AI project is easy, medium, or hard to implement?
- Why does the assessment treat "agents" as less important than prompting for most value?
- How do diminishing returns and metrics shape which AI initiatives to pursue?
Review Questions
- What outcome should AI be optimizing for in your business, and what bottleneck currently prevents that outcome?
- Which parts of your customer value chain are most digital and therefore most leverageable by AI, and which parts depend on in-person delivery?
- What documentation and measurement gaps would block AI from improving a specific function in your organization?
Key Points
1. Define the AI target as a business outcome and identify the bottleneck AI should remove before choosing tools.
2. Map AI leverage across the customer value chain, treating market-to-lead and lead-to-sale as high-leverage areas when work is computer-based.
3. Document functions externally (checklists, flowcharts, SOPs); AI can generate missing documentation quickly, but teams must validate, teach, and measure it.
4. Grade implementation difficulty into prompting, no-code workflow automation (e.g., Make, Zapier), or custom API integrations (e.g., OpenAI's API, Slack's API).
5. Avoid over-investing in "agents" when prompting with the right context can deliver most of the value.
6. Use ROI logic that accounts for diminishing returns, not just speed gains, and choose initiatives where scaling frequency actually creates value.
7. Set success metrics and update standards, then secure cultural buy-in so role expectations and incentives align with the new AI-enabled workflow.