5 Tempting AI SaaS Ideas I’d NEVER Build (here’s why)
Based on Simon Høiberg's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Five AI SaaS ideas keep luring indie hackers with the promise of quick wins—but the economics, ethics, and market dynamics behind them make failure far more likely than most pitches admit. The through-line is simple: when an idea is either too saturated, too ethically risky, or too tightly constrained by platform economics, it’s hard for a small team to differentiate or survive.
First up is standalone AI copywriting. The market is already crowded: the low barrier to entry became obvious after ChatGPT launched in 2022, and early movers like Copy.ai and Peppertype.ai helped prove demand. But the core problem now is sameness: many tools are essentially wrappers around the same underlying LLM APIs, so differentiation collapses into surface-level UX and pricing. That drives a race to the bottom, high churn, and a business that struggles to retain paying customers. Even prominent LinkedIn influencers appear to be shipping near-identical copywriter products, reinforcing the sense of a repeatable "playbook" rather than a defensible product. The suggested workaround is to treat an AI copywriter as a feature inside a broader offering (like the AI copywriting capability within FeedHive) rather than selling it as the whole product.
Second is AI coding tools aimed at developers, including products in the orbit of Cursor and Windsurf. While these tools look like they have product-market fit, their appeal is heavily tied to aggressive pricing and bundled value. The transcript contrasts Cursor with Cline, an open-source VS Code extension that requires users to bring their own API key; the implication is that serving typical users at a flat price makes Cursor's unit economics brutal. The deeper issue is that LLM API pricing scales directly with consumption: more active users means higher costs, turning growth into a cash drain. Cursor can afford that because it's backed by top-tier investment funds and can burn money to chase dominance, a strategy bootstrapped founders can't easily match.
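The consumption-pricing squeeze can be sketched with a toy margin model. All prices and usage figures below are hypothetical assumptions for illustration, not actual Cursor subscription fees or real API rates:

```python
# Hypothetical sketch: flat-rate subscription revenue vs. consumption-based
# LLM API costs. Every number here is an illustrative assumption.

SUBSCRIPTION_PRICE = 20.00      # assumed flat monthly fee per user, in dollars
API_COST_PER_1M_TOKENS = 10.00  # assumed blended API cost per million tokens

def monthly_margin(tokens_used: int) -> float:
    """Revenue minus API cost for one user in one month."""
    api_cost = tokens_used / 1_000_000 * API_COST_PER_1M_TOKENS
    return SUBSCRIPTION_PRICE - api_cost

# A light user is profitable; a heavy user flips the margin negative,
# so growth in active usage grows losses rather than profit.
for tokens in (500_000, 2_000_000, 5_000_000):
    print(f"{tokens:>9,} tokens -> margin ${monthly_margin(tokens):+.2f}")
# Prints margins of +15.00, +0.00, and -30.00 respectively.
```

The design point is that with per-token costs and flat pricing, the marginal cost of each additional active user never goes to zero, which is the opposite of classic SaaS economics.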
Third is AI therapists. The pitch—an on-demand therapist in your pocket—sounds compelling, but the ethical stakes are unusually high. The transcript cites real-world harms: a man from Belgium who died by suicide after weeks of conversations with an AI chatbot called Eliza on the Chai app, and the National Eating Disorders Association taking down its AI chatbot Tessa after it provided harmful suggestions to users. Even if chatbots aren’t the sole cause of such outcomes, a small company can’t realistically absorb the risk when vulnerable users are involved. The recommendation is to avoid health-adjacent products, especially mental health.
Fourth is AI detectors for SEO—tools that claim they can identify AI-written text or help rewrite content to evade detection. The transcript argues they target the wrong problem: Google’s stated concern isn’t whether AI was used, but whether content is spammy, low-quality, and manipulative. It also warns that detectors are a third-party cat-and-mouse game without access to Google’s internal ranking logic, and that “patching” already-written content can degrade quality further.
Finally, AI summarizers are framed as a feature, not a standalone SaaS. Major ecosystems already provide free summaries, such as email summaries in Google Workspace and meeting recaps in Google Meet, making differentiation difficult. Even if a new product could offer value, it faces a major trust barrier: accessing sensitive inboxes, private meetings, and internal calendars is not something companies hand to small startups easily. Taken together, the five categories share the same lesson: avoid ideas where saturation, unit economics, ethical exposure, or platform integration leave little room for a defensible, sustainable business.
Cornell Notes
The transcript lists five AI SaaS categories that are unusually hard for indie hackers to build profitably or safely: standalone AI copywriters, AI coding tools, AI therapists, AI detectors (especially for SEO), and AI summarizers. The reasons vary—market saturation and copycat sameness for copywriting; LLM API costs that scale with usage for coding tools; extreme ethical risk for therapy; misaligned incentives and “cat-and-mouse” dynamics for detectors; and platform-level free integrations plus high trust barriers for summarizers. The practical takeaway is to avoid building products that either can’t differentiate, can’t survive the unit economics, or can’t manage the harm potential. AI capabilities may still work as features inside broader, non-critical products.
Why does standalone AI copywriting struggle as a SaaS business?
What economic mechanism makes AI coding tools risky for bootstrap founders?
What makes AI therapists uniquely dangerous compared with other AI products?
Why are AI detectors a poor solution for SEO content problems?
Why are AI summarizers portrayed as hard to sell as standalone products?
Review Questions
- Which of the five categories is most vulnerable to differentiation failure, and what market behavior drives that outcome?
- How does consumption-based LLM pricing change the economics of AI tools as user adoption grows?
- What ethical or safety factors make mental-health-adjacent AI products harder to run as a small business?
Key Points
1. Standalone AI copywriting is portrayed as too saturated and too easy to replicate, pushing products toward identical behavior and price competition.
2. LLM API costs that scale with usage can turn growth into a cash drain for AI coding tools, especially for bootstrapped teams.
3. AI therapists carry unusually high ethical and safety risk, with cited examples of harmful outcomes involving AI chatbots.
4. AI detectors for SEO target the wrong problem and encourage a cat-and-mouse strategy that doesn't guarantee better rankings.
5. AI summarizers are difficult to monetize because major ecosystems already provide free, integrated summaries.
6. Treating AI capabilities as features inside broader, non-critical products is presented as a more defensible approach than selling them as standalone SaaS.