How I Generate 100+ SaaS Ideas
Based on Simon Høiberg's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Mine dissatisfaction in established markets by searching for “alternative to [tool]” and focusing on solvable complaints.
Briefing
Generating SaaS ideas at scale isn’t about finding a single “genius” concept—it’s about building a repeatable pipeline that turns public customer frustration into testable product bets. The core approach centers on starting with established markets, mining complaints for specific pain points, and then validating the pattern before committing time or money. That matters because most ideas won’t pan out; the goal is to run enough structured experiments that at least one can become profitable.
The framework begins with choosing a market where demand already exists. Instead of inventing from scratch, it targets “busy internet streets” with heavy traffic: users already know the category, and the opportunity is to serve people who are unhappy with the current tools. A practical method is to use Reddit to search for “alternative to [tool]” and open posts that look relevant. Each candidate is then filtered using two questions: whether building an alternative is realistically viable, and whether the complaints can actually be addressed. The process intentionally weeds out ideas that are too ambitious, too uncertain, or based on issues that may not be solvable.
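The Reddit search step above can be sketched programmatically. Reddit exposes search results as JSON by appending `.json` to the search URL; the query template and the list of example tools below are illustrative assumptions, not from the video (the video simply uses Reddit's search box).

```python
from urllib.parse import quote_plus

# Reddit serves search results as JSON when ".json" is appended to the path.
REDDIT_SEARCH = "https://www.reddit.com/search.json?q={query}&sort=relevance&limit=25"

def alternative_query_url(tool: str) -> str:
    """Build a Reddit search URL for posts asking for an alternative to <tool>."""
    return REDDIT_SEARCH.format(query=quote_plus(f'"alternative to {tool}"'))

# Example tools (hypothetical targets, not from the source):
for tool in ["Notion", "Calendly", "Mailchimp"]:
    print(alternative_query_url(tool))
```

Each URL can then be fetched and the returned posts skimmed manually, applying the two filter questions (viability, solvable complaints) by hand.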
Once a shortlist forms, the next step adds stronger evidence than a single Reddit comment. G2 is used to locate product reviews for the tool being targeted, with a focus on the lowest-rated feedback. The goal is to find recent, specific complaints—ideally reviews from the last three months—that describe concrete pain points and what users would have wanted instead. The method calls for gathering at least five to ten additional reviews and then looking for common themes. If multiple users independently mention similar issues—like laggy performance, difficult navigation, or missing features—those repeated signals become the raw material for a new SaaS concept. In the example, the pattern points toward a “more lightweight” alternative that’s easier to use and runs faster in the browser.
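The "look for common themes" step can be approximated with a simple keyword tally. The theme names and keyword lists below are hypothetical placeholders; in practice you would grow them from the complaints you actually collect.

```python
from collections import Counter

# Hypothetical complaint themes and keywords that signal them (assumptions).
THEMES = {
    "performance": ["slow", "lag", "laggy", "freeze"],
    "usability": ["confusing", "navigation", "clunky", "hard to use"],
    "missing features": ["missing", "lacks", "no way to"],
}

def tally_themes(reviews: list[str]) -> Counter:
    """Count how many reviews mention each theme at least once."""
    counts: Counter = Counter()
    for text in reviews:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(k in lowered for k in keywords):
                counts[theme] += 1
    return counts

# Illustrative reviews, echoing the complaint types named in the briefing:
reviews = [
    "The editor is laggy and the app freezes on large files.",
    "Navigation is confusing; I could never find settings.",
    "Slow to load, and it lacks a dark mode.",
]
print(tally_themes(reviews).most_common())
```

A theme mentioned independently by several of the 5–10 reviews is the "repeated signal" the framework treats as raw material for a new concept.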
After qualitative research, the process adds a newer, higher-volume tactic: collecting visual evidence of bad reviews and using ChatGPT to synthesize them into a business case. The workflow suggests using a Chrome extension (GoFullPage) to capture multiple full pages of negative reviews from sites like G2, then optionally expanding to sources such as Trustpilot or Facebook. The images are uploaded to ChatGPT, which produces a detailed business case and a specific outline for a SaaS idea. The emphasis here is less on perfect data quality and more on generating enough structured insight to spot patterns.
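The video does this through the ChatGPT web UI, but the same idea works programmatically with an OpenAI-style vision API. The sketch below only builds the chat payload (base64-encoded screenshots plus an instruction prompt); the prompt wording and the payload shape are assumptions, and the actual API call is omitted since it needs a key and network access.

```python
import base64

# Hypothetical instruction prompt (not quoted from the video).
PROMPT = (
    "These screenshots show negative reviews of a SaaS tool. "
    "Summarize the recurring complaints and outline a business case "
    "for a competing product that fixes them."
)

def build_vision_messages(screenshots: list[bytes]) -> list[dict]:
    """Package review screenshots and instructions into a chat-style payload."""
    content = [{"type": "text", "text": PROMPT}]
    for png in screenshots:
        b64 = base64.b64encode(png).decode("ascii")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{b64}"},
        })
    return [{"role": "user", "content": content}]

# Usage: one fake screenshot for illustration; pass real PNG bytes in practice.
messages = build_vision_messages([b"\x89PNG...fake bytes"])
print(len(messages[0]["content"]))  # prompt + 1 image
```

The returned `messages` list is what you would hand to a chat-completions endpoint to get the synthesized business case back.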
Before building, the framework introduces a cooling-off period. The idea should be added to an idea database, then revisited after one or two weeks to see whether excitement fades or persists. Persistent interest signals a stronger candidate. Even then, the advice is to stay diversified: push the best idea forward alongside other ideas rather than betting everything on a single concept. The end goal is an ongoing “SaaS factory” that continuously generates and tests multiple bets, accepting that only a few will succeed.
Cornell Notes
The method for generating SaaS ideas relies on mining dissatisfaction in established markets, then validating repeated complaints before committing to a build. It starts with Reddit searches for “alternative to [tool]” to identify users who want a different option, followed by filtering for viability and solvable complaints. G2 reviews then provide higher-signal evidence: recent (within ~3 months), specific, low-rated feedback is collected and checked for common pain points. For scale, full-page screenshots of negative reviews can be fed into ChatGPT to produce a business case and a concrete idea outline. Finally, ideas are stored and revisited after 1–2 weeks, and the founder keeps a diversified portfolio rather than betting everything on one concept.
Why start with “alternatives” to existing tools instead of inventing new categories?
How does the framework filter Reddit “alternative to [tool]” posts so the list doesn’t explode?
What does “good evidence” look like when using G2 to validate an idea?
How does the process scale beyond manual reading of reviews?
What’s the purpose of the 1–2 week “idea cooling” period?
Why keep a diversified portfolio even after finding a promising idea?
Review Questions
- When mining Reddit for “alternative to [tool],” which two questions determine whether a candidate idea stays on the shortlist?
- What specific criteria make a G2 review “useful” for this framework (recency, specificity, and what else)?
- How do screenshots + ChatGPT synthesis change the speed and nature of idea generation compared with manual review reading?
Key Points
1. Mine dissatisfaction in established markets by searching for “alternative to [tool]” and focusing on solvable complaints.
2. Filter early using two checks: viability for the founder and whether the stated complaints can realistically be addressed.
3. Use G2 to collect recent, low-rated reviews (within ~3 months) and extract recurring, actionable pain points.
4. Look for common patterns across at least 5–10 reviews before treating a concept as a real SaaS candidate.
5. Scale qualitative research by screenshotting multiple pages of negative reviews and using ChatGPT to synthesize a business case and idea outline.
6. Store ideas in a database and revisit after 1–2 weeks to test whether motivation persists.
7. Keep diversification: advance multiple ideas in parallel rather than betting everything on one launch.