
Answer These 8 Questions Before You Implement AI

Tiago Forte · 5 min read

Based on Tiago Forte's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Define the AI target as a business outcome and identify the bottleneck AI should remove before choosing tools.

Briefing

A readiness assessment for AI adoption is the difference between using AI to improve a business and burning $50,000 on tools that make operations worse. The core message: AI projects fail most often when companies start with tools instead of outcomes—skipping the questions that determine what to optimize, which parts of the business can actually be leveraged, and how to measure success.

The first step is defining the business outcome AI should drive. Instead of asking “Which AI tool should we try?”, the assessment asks what the company is optimizing for and what bottleneck is blocking that goal. Examples include scaling without constant founder involvement, reducing work that leaders dislike (like accountability and follow-through), or supporting growth and even exit-related objectives. AI is framed as a “genius coach,” but only if it receives the right context; the assessment treats context as the limiting factor, not the model’s capability.

From there, the assessment forces a reality check on where AI can create leverage. The discussion breaks the business into a customer value chain: market-to-lead (discoverability and permission to contact), lead-to-sale (turning contacts into transactions), sale-to-delivery (handoff and onboarding), delivery-to-success (fulfillment that produces outcomes), and then the loop back to market via success-to-lead (reviews, referrals, and repeat demand). AI can heavily automate or improve the earlier, more digital segments—especially customer acquisition and remote fulfillment—while the “delivery-to-success” portion is harder when it depends on in-person work. Even in technician-heavy industries like plumbing, AI can’t replace the physical job, but it can optimize pricing, customer service, and dispatch, and improve enabling functions like finance and hiring workflows.

A key prerequisite is process maturity. AI can only improve what’s documented: checklists, flowcharts, and SOPs beat “tribal knowledge.” The assessment also offers a practical workaround—AI can help generate documentation quickly (turning months of whiteboard-and-workshop effort into minutes), but teams still need to validate, teach, and measure what gets produced.

The framework then grades implementation difficulty in three tiers: simple prompting inside an LLM, no-code automation using tools like Make, Zapier, and similar workflows, and custom integrations that require APIs such as OpenAI’s or Slack’s. It also warns that “agents” are often overhyped; much of the value can come from prompting alone, while agent integrations can take months and remain fragile.

Finally, the assessment insists on ROI logic and measurement. Some activities scale in frequency without diminishing returns (like outbound outreach), while others hit practical limits (like sending newsletters or publishing content). Success metrics must be tied to the specific process being improved, along with updated standards and cultural buy-in—because changing expectations alters roles, incentives, and day-to-day behavior. Examples include shifting newsletter work from a full-time task to a smaller slice of a broader AI-assisted content and optimization role, and using AI-driven prediction to raise fulfillment rates in dry goods/clothing without requiring the same inventory costs.

The takeaway is straightforward: answer the eight readiness questions first, then implement AI where it can be leveraged, measured, and adopted—otherwise the business risks scaling the wrong work faster.

Cornell Notes

AI adoption succeeds when companies start with a readiness assessment that links business outcomes to measurable, leverageable processes. The framework begins by defining what the business is optimizing for and identifying the bottleneck AI should relieve, then maps AI leverage across the customer value chain (market-to-lead, lead-to-sale, sale-to-delivery, delivery-to-success, and the success-to-market loop). It emphasizes that AI works best on documented functions—checklists, flowcharts, and SOPs—while AI can also help generate missing documentation quickly, though teams must still validate and train it. Implementation difficulty is graded from prompting to no-code automation to custom API integrations, and success requires ROI thinking plus metrics, updated standards, and cultural acceptance.

Why does starting with AI tools often backfire, and what replaces that approach?

The assessment argues that failure usually comes from skipping the outcome-first questions. Companies that buy or experiment with tools without defining what they’re optimizing for tend to automate the wrong work. The replacement is a readiness assessment that asks what outcome AI should drive, what bottleneck blocks that outcome, and which business functions can be leveraged by AI before any implementation begins.

How does the customer value chain determine where AI will help most?

AI leverage is highest in the earlier, more digital parts of the chain: market-to-lead (discoverability and permission to contact), lead-to-sale (turning contacts into transactions), and sale-to-delivery when onboarding can be automated. Delivery-to-success is harder when it depends on in-person physical work, but AI can still support enabling steps like dispatch optimization, customer service, pricing, and finance. The loop back to market—success-to-lead via reviews and referrals—is treated as a key “virtuous cycle” area that many businesses ignore, yet it can be leveraged with AI.

What prerequisite makes a business “ready” for AI, and what if documentation is missing?

Readiness depends on having functions documented externally—checklists, flowcharts, and SOPs—rather than living only in someone’s head. If documentation is missing, AI can help create it quickly using a master prompt, potentially replacing months of manual whiteboarding and workshop work with minutes of first drafts. The remaining work is human: validating with the team, teaching the process, and setting measurement so it actually works in practice.
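The exact wording of the master prompt is not given in the summary, but its shape can be sketched as a simple template. Everything in this example—the field names, the requested outputs, the validation flag—is an assumed illustration, not the video's actual prompt:

```python
# Hypothetical sketch of a documentation "master prompt" template.
# The real master prompt from the video is not reproduced here;
# this is an assumed illustration of the approach.

SOP_PROMPT = """You are documenting a business process.
Process name: {name}
Owner's rough description: {notes}

Produce:
1. A step-by-step checklist.
2. A simple text flowchart (step -> step).
3. Success metrics the team should track.
Flag any steps you had to guess so a human can validate them."""

def build_sop_prompt(name: str, notes: str) -> str:
    """Fill the template with a process name and the owner's tribal knowledge."""
    return SOP_PROMPT.format(name=name, notes=notes)

prompt = build_sop_prompt(
    "Client onboarding",
    "We email a welcome packet, then schedule a kickoff call.",
)
print(prompt.splitlines()[1])  # Process name: Client onboarding
```

Note the final instruction in the template: it bakes in the human validation step the assessment insists on, rather than treating the AI draft as finished documentation.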

How should businesses decide whether an AI project is easy, medium, or hard to implement?

The framework uses three difficulty tiers: (1) prompting an LLM with the right context (no code), (2) no-code automation/workflows using tools like Make and Zapier, and (3) custom implementations requiring API integrations such as OpenAI’s API or Slack’s API. A practical method for non-experts is to give their LLM the business context and ask it to map the project into one of these tiers.

Why does the assessment treat “agents” as less important than prompting for most value?

The discussion warns that “agents” are largely a buzzword: by its estimate, 99% of the value comes from prompting with the right context. Agent-style integrations can take months, remain fragile, and still deliver less than expected, especially when the integration effort doesn’t match the business payoff.

How do diminishing returns and metrics shape which AI initiatives to pursue?

AI can make tasks faster, but not every activity benefits from scaling frequency endlessly. Newsletter creation may become much faster, yet sending 20–30 emails a day may not add proportional value. Outbound outreach can scale far more because markets respond over time, so AI can increase contact volume without the same diminishing returns. Success metrics must be tied to the specific process and updated standards (for example, shifting a newsletter role from full-time writing to a smaller portion of an AI-assisted content and optimization workflow). Cultural buy-in matters because roles and incentives change.
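The ROI logic above can be made concrete with a toy calculation. The response values and decay curve here are made-up numbers chosen only to illustrate the comparison, not real benchmarks:

```python
# Minimal sketch of the diminishing-returns comparison.
# All numbers are invented for illustration.

def marginal_value(activity: str, daily_volume: int) -> float:
    """Estimated value of the NEXT unit of activity at a given daily volume."""
    if activity == "outbound_outreach":
        # roughly constant: each extra contact reaches a new prospect
        return 5.0
    if activity == "newsletter_email":
        # value per extra email falls sharply as subscribers tune out
        return 20.0 / (1 + daily_volume)
    raise ValueError(f"unknown activity: {activity}")

# Scaling outreach 100x still pays per unit; the 10th daily newsletter barely does.
print(marginal_value("outbound_outreach", 100))  # 5.0
print(marginal_value("newsletter_email", 1))     # 10.0
print(marginal_value("newsletter_email", 10))    # ~1.8
```

The point of the sketch is the shape, not the numbers: AI that makes newsletters 10x faster does not make sending 10x more newsletters 10x more valuable, whereas outreach volume can keep compounding.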

Review Questions

  1. What outcome should AI be optimizing for in your business, and what bottleneck currently prevents that outcome?
  2. Which parts of your customer value chain are most digital and therefore most leverageable by AI, and which parts depend on in-person delivery?
  3. What documentation and measurement gaps would block AI from improving a specific function in your organization?

Key Points

  1. Define the AI target as a business outcome and identify the bottleneck AI should remove before choosing tools.
  2. Map AI leverage across the customer value chain, treating market-to-lead and lead-to-sale as high-leverage areas when work is computer-based.
  3. Document functions externally (checklists, flowcharts, SOPs); AI can generate missing documentation quickly, but teams must validate, teach, and measure it.
  4. Grade implementation difficulty into prompting, no-code workflow automation (e.g., Make, Zapier), or custom API integrations (e.g., OpenAI’s API, Slack’s API).
  5. Avoid over-investing in “agents” when prompting with the right context can deliver most of the value.
  6. Use ROI logic that accounts for diminishing returns, not just speed gains, and choose initiatives where scaling frequency actually creates value.
  7. Set success metrics and update standards, then secure cultural buy-in so role expectations and incentives align with the new AI-enabled workflow.

Highlights

AI readiness starts with outcome and bottleneck clarity; tool-first experimentation is a common path to wasted spend.
The customer value chain framework shows why acquisition and remote fulfillment are typically AI-friendly, while in-person delivery limits automation.
AI can turn undocumented processes into usable documentation fast, but human teams still must validate and operationalize it.
Implementation difficulty can be sorted into prompting, no-code automation, and custom API work—helping non-experts avoid mismatched effort.
Success depends on measurable standards and cultural acceptance, not just faster content or automated tasks.

Topics

  • AI Readiness Assessment
  • Customer Value Chain
  • Process Documentation
  • AI Implementation Tiers
  • ROI Metrics

Mentioned

  • LLM
  • SEO
  • API