
5 Tempting AI SaaS Ideas I’d NEVER Build (here’s why)

Simon Høiberg·
5 min read

Based on Simon Høiberg's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Standalone AI copywriting is portrayed as too saturated and too easy to replicate, pushing products toward identical behavior and price competition.

Briefing

Five AI SaaS ideas keep luring indie hackers with the promise of quick wins—but the economics, ethics, and market dynamics behind them make failure far more likely than most pitches admit. The through-line is simple: when an idea is either too saturated, too ethically risky, or too tightly constrained by platform economics, it’s hard for a small team to differentiate or survive.

First up is standalone AI copywriting. The market is already crowded: the low barrier to entry became obvious after GPT-3 launched, and early movers like Copy.ai and Peppertype helped prove demand. But the core problem now is sameness: many tools are essentially wrappers around the same underlying LLM APIs, so differentiation collapses into surface-level UX and pricing. That drives a race to the bottom, high churn, and a business that struggles to retain paying customers. Even prominent LinkedIn influencers appear to be shipping near-identical copywriter products, reinforcing the sense of a repeatable “playbook” rather than a defensible product. The suggested workaround is to treat an AI copywriter as a feature inside a broader offering, like the AI copywriting capability within FeedHive, rather than selling it as the whole product.

Second is AI coding tools aimed at developers, including products in the orbit of Cursor and Windsurf. While these tools look like they have product-market fit, their appeal is heavily tied to aggressive pricing and bundled value. The transcript contrasts Cursor with Cline, an open-source VS Code extension that requires users to bring their own API key; the implication is that Cursor’s flat pricing subsidizes what typical users actually consume, so its unit economics can be brutal. The deeper issue is that LLM API pricing scales directly with consumption: more active users mean higher costs, turning growth into a cash drain. Cursor can afford that because it’s backed by top-tier investment funds and can burn money to chase dominance, a strategy bootstrapped founders can’t easily match.
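The growth-as-cash-drain dynamic is easy to see with a back-of-the-envelope calculation. The sketch below is purely illustrative: every number (price, request volume, tokens per request, API rate) is a made-up assumption, not a figure from the video, but it shows how the same flat subscription that is profitable for light users flips negative once power users arrive.

```python
# Hypothetical unit-economics sketch: flat subscription revenue vs.
# consumption-based LLM API costs. All numbers are illustrative
# assumptions, not figures from the transcript.

def monthly_margin(subscribers, price_per_month, requests_per_user,
                   tokens_per_request, cost_per_1k_tokens):
    """Return (revenue, api_cost, margin) for one month."""
    revenue = subscribers * price_per_month
    total_tokens = subscribers * requests_per_user * tokens_per_request
    api_cost = total_tokens / 1000 * cost_per_1k_tokens
    return revenue, api_cost, revenue - api_cost

# 1,000 light users at $20/mo: API spend is a fraction of revenue.
print(monthly_margin(1000, 20.0, 200, 2000, 0.01))    # margin +$16,000

# The same 1,000 users behaving like power users: margin goes deeply
# negative, even though "adoption" looks like a success metric.
print(monthly_margin(1000, 20.0, 5000, 2000, 0.01))   # margin -$80,000
```

This is why a bring-your-own-key model like Cline sidesteps the problem (the user pays the variable cost directly), while a flat-priced product needs either usage caps or investor cash to absorb the spread.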

Third is AI therapists. The pitch—an on-demand therapist in your pocket—sounds compelling, but the ethical stakes are unusually high. The transcript cites real-world harms: a man from Belgium who died by suicide after weeks of conversations with an AI chatbot called Eliza on the Chai app, and the National Eating Disorders Association taking down its AI chatbot Tessa after it provided harmful suggestions to users. Even if chatbots aren’t the sole cause of such outcomes, a small company can’t realistically absorb the risk when vulnerable users are involved. The recommendation is to avoid health-adjacent products, especially mental health.

Fourth is AI detectors for SEO—tools that claim they can identify AI-written text or help rewrite content to evade detection. The transcript argues they target the wrong problem: Google’s stated concern isn’t whether AI was used, but whether content is spammy, low-quality, and manipulative. It also warns that detectors are a third-party cat-and-mouse game without access to Google’s internal ranking logic, and that “patching” already-written content can degrade quality further.

Finally, AI summarizers are framed as a feature, not a standalone SaaS. Major ecosystems already provide free summaries—such as email summaries in Google Workspace and meeting recaps in Google Meet—making differentiation difficult. Even if a new product could offer value, it faces a major trust barrier: accessing sensitive inboxes, private meetings, and internal calendars is not something companies hand to small startups easily. Taken together, the five categories share the same lesson: avoid ideas where saturation, unit economics, ethical exposure, or platform integration leave little room for a defensible, sustainable business.

Cornell Notes

The transcript lists five AI SaaS categories that are unusually hard for indie hackers to build profitably or safely: standalone AI copywriters, AI coding tools, AI therapists, AI detectors (especially for SEO), and AI summarizers. The reasons vary—market saturation and copycat sameness for copywriting; LLM API costs that scale with usage for coding tools; extreme ethical risk for therapy; misaligned incentives and “cat-and-mouse” dynamics for detectors; and platform-level free integrations plus high trust barriers for summarizers. The practical takeaway is to avoid building products that either can’t differentiate, can’t survive the unit economics, or can’t manage the harm potential. AI capabilities may still work as features inside broader, non-critical products.

Why does standalone AI copywriting struggle as a SaaS business?

It’s described as a saturated market where low barriers to entry make many products nearly identical—often just wrappers around the same LLM APIs. With differentiation weak, pricing tends to collapse into a race to the bottom, and churn rises. The transcript also notes a pattern of influencers shipping similar AI copywriters, reinforcing the sense of a repeatable “playbook” and reducing defensibility. The suggested alternative is to embed an AI copywriter as one feature inside a larger product, such as FeedHive’s social media management offering.

What economic mechanism makes AI coding tools risky for bootstrap founders?

LLM API pricing scales with consumption, so more active users directly increase costs. The transcript argues that a successful user base—normally good news—becomes a financial liability for aggressively AI-consuming tools. It also contrasts bootstrap constraints with VC-backed competitors like Cursor, which can absorb losses to buy market dominance, effectively competing on cash burn rather than unit economics.

What makes AI therapists uniquely dangerous compared with other AI products?

The transcript frames mental-health support as high-stakes where mistakes can cause real harm. It cites a Belgium case involving an AI chatbot called Eliza on the Chai app, where a man’s suicide followed conversations that allegedly encouraged self-sacrifice. It also cites the National Eating Disorders Association taking down its AI chatbot Tessa after harmful weight-restricting suggestions. Even without proving causation, the risk is too large for small teams that can’t manage outcomes and liability.

Why are AI detectors a poor solution for SEO content problems?

The transcript argues they don’t solve the real issue. Google’s concern is described as low-quality, spammy content meant to manipulate rankings—not whether AI wrote it. Even if detectors label content as “not AI,” it can still rank poorly due to quality. The tools also rely on a cat-and-mouse game without access to Google’s algorithms, and rewriting content to evade detection can degrade quality further. The recommended direction is building tools that produce genuinely high-quality content with AI.

Why are AI summarizers portrayed as hard to sell as standalone products?

They’re described as already commoditized features inside major productivity platforms. Google Workspace and Google Meet provide free summaries and recaps, including email summaries and meeting action items. A new SaaS would also face a trust barrier: it would need access to sensitive inboxes, private meetings, and internal calendars, which companies are unlikely to grant to small startups.

Review Questions

  1. Which of the five categories is most vulnerable to differentiation failure, and what market behavior drives that outcome?
  2. How does consumption-based LLM pricing change the economics of AI tools as user adoption grows?
  3. What ethical or safety factors make mental-health-adjacent AI products harder to run as a small business?

Key Points

  1. Standalone AI copywriting is portrayed as too saturated and too easy to replicate, pushing products toward identical behavior and price competition.

  2. LLM API costs that scale with usage can turn growth into a cash drain for AI coding tools, especially for bootstrap teams.

  3. AI therapists carry unusually high ethical and safety risk, with cited examples of harmful outcomes involving AI chatbots.

  4. AI detectors for SEO target the wrong problem and encourage a cat-and-mouse strategy that doesn’t guarantee better rankings.

  5. AI summarizers are difficult to monetize because major ecosystems already provide free, integrated summaries.

  6. Treating AI capabilities as features inside broader, non-critical products is presented as a more defensible approach than selling them as standalone SaaS.

Highlights

AI copywriting is described as a wrapper-driven market where low entry barriers lead to sameness, churn, and pricing pressure.
Consumption-based LLM pricing makes AI coding tools financially dangerous for bootstrap founders, since active users increase costs.
AI therapists are framed as ethically unmanageable for small teams, with cited cases tied to AI chatbot interactions.
AI detectors don’t fix the ranking problem—Google’s focus is on spammy, low-quality content rather than whether AI was used.
AI summarizers are treated as commoditized features because Google Workspace and Google Meet already deliver free summaries and recaps.
