How To Find BRILLIANT SaaS Ideas
Based on Simon Høiberg's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Treat “brilliant” SaaS ideas as hypotheses that must survive stress tests for money, feasibility, and demand before building an MVP.
Briefing
A “brilliant” SaaS idea isn’t a lucky spark—it’s something that survives brutal stress tests for money, execution, and demand. The core lesson from a failed 100-day startup sprint is that even a seemingly motivating concept (like development leaderboards) can collapse if the problem isn’t valuable, the team can’t realistically build it, or users won’t pay.
The story begins in late 2019, when a software consultant and a colleague joked about launching a tech startup in 100 days. The mission quickly narrowed to building and launching a SaaS product within the deadline. The first idea came from sales, where leaderboards showed who ranked highest on revenue and performance, and that visibility drove motivation. The same logic seemed missing in software development, so the plan was to create a dashboard/leaderboard showing metrics like who reviewed the most PRs and who committed the most changes, with managers able to report on team performance.
Validation looked promising at first. A landing page and social posts generated newsletter signups, and the MVP was built over the next 90 days. On day 100, outreach went out via email, Reddit, Indie Hackers, and Product Hunt. Then the results turned: no one from the newsletter converted, and there were no meaningful upgrades. After watching the numbers for days, the conclusion arrived—this was a major failure, not a slow burn.
That collapse becomes the foundation for three tests designed to prevent “triangle-broken” ideas from reaching the build stage. The first test is “dogfooding”: can the founder realistically use the product while building it, and do they have the experience to solve the problem at scale? The leaderboard concept failed here because development teams aren’t sales teams, and the founder lacked the product-owner or team-performance context needed to judge feasibility and user desire.
The second test asks whether the problem is worth solving by placing it into a four-quadrant map: big/infrequent, small/frequent, small/infrequent, or big/frequent. The leaderboard problem was treated as a small and infrequent need—especially since tools like Jira already provide strong project-level reporting—so the “team performance leaderboard” wasn’t a sufficiently urgent pain.
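The quadrant test can be sketched as a simple decision rule. This is an illustrative sketch only; the function names and the strict "big/frequent only" pass rule are assumptions layered on the video's framework, not code from the source.

```python
# Classify a problem by pain magnitude and frequency, then flag
# quadrants that rarely justify a paid SaaS product.

def classify(magnitude: str, frequency: str) -> str:
    """magnitude is 'big' or 'small'; frequency is 'frequent' or 'infrequent'."""
    return f"{magnitude}/{frequency}"

def worth_solving(magnitude: str, frequency: str) -> bool:
    # Assumption: only big/frequent problems clearly pass the gate;
    # small/infrequent is the strongest rejection signal.
    return classify(magnitude, frequency) == "big/frequent"

# The leaderboard idea was a small, infrequent pain -> rejected.
print(worth_solving("small", "infrequent"))  # False
# FeedHive targeted a big, frequent pain -> passes this gate.
print(worth_solving("big", "frequent"))      # True
```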
The third test is affordability: can the business sustain the economics of the product at a price users will pay, given acquisition costs, onboarding/support burden, and per-user expenses (including infrastructure and AI usage)? The founder’s later experience with FeedHive shows how pricing must match both market willingness and cost structure. Starting at $5/month for a social media management tool proved unrealistic due to competitive acquisition and expensive third-party integrations and AI. Pricing was adjusted to $19, $29, or $99 per month, aligning with a problem that’s both big and frequent and with deep domain expertise to keep the product competitive.
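The affordability test boils down to back-of-envelope unit economics: revenue per user must exceed per-user costs plus acquisition cost amortized over the customer's expected lifetime. The sketch below illustrates that arithmetic; all cost figures and the lifetime assumption are hypothetical, not numbers from the video.

```python
# Monthly margin per user = price
#   - per-user running costs (infrastructure, AI usage, support)
#   - customer acquisition cost (CAC) spread over expected lifetime.

def monthly_margin(price: float, infra_cost: float, ai_cost: float,
                   support_cost: float, cac: float,
                   lifetime_months: int) -> float:
    per_user_cost = infra_cost + ai_cost + support_cost
    amortized_cac = cac / lifetime_months
    return price - per_user_cost - amortized_cac

# Hypothetical cost structure: $4.50/user/month in running costs,
# $60 CAC amortized over a 12-month lifetime ($5/month).
low = monthly_margin(price=5, infra_cost=1.5, ai_cost=2.0,
                     support_cost=1.0, cac=60, lifetime_months=12)
mid = monthly_margin(price=29, infra_cost=1.5, ai_cost=2.0,
                     support_cost=1.0, cac=60, lifetime_months=12)
print(low, mid)  # -4.5 19.5
```

Under these assumed costs, a $5/month price loses money on every user, while $29/month leaves healthy margin, which mirrors why FeedHive's pricing moved up from its initial $5 point.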
The takeaway is blunt: a SaaS idea needs to close the viability–feasibility–desirability triangle and pass aggressive checks before time and money get sunk into an MVP. The founder’s second attempt, FeedHive, succeeded because it cleared those gates—while the leaderboard concept didn’t.
Cornell Notes
The founder’s 100-day SaaS sprint produced a “brilliant” idea that still failed: a development leaderboard meant to motivate engineers and help managers report performance. Newsletter signups didn’t translate into paid upgrades, revealing a broken viability–feasibility–desirability triangle. Three stress tests prevent this: (1) dogfooding—does the founder truly understand and have the experience to build for the problem at scale? (2) problem worth solving—does the pain happen often enough and matter enough to justify payment, rather than duplicating existing tools like Jira? (3) affordability—can the product be priced to cover acquisition, support, and per-user costs, including AI and integration expenses? FeedHive succeeded after pricing and economics were aligned with a big, frequent user problem.
Why did the development leaderboard idea fail despite early interest?
What does “dogfooding” mean in this framework, and how does it connect to feasibility?
How does the “problem worth solving” test use quadrants to filter ideas?
What does the affordability test measure, and why isn’t “raise prices” always the answer?
How did FeedHive’s pricing change reflect the affordability test?
Review Questions
- Which part of the viability–feasibility–desirability triangle was most clearly broken in the leaderboard example, and what evidence from the launch supports that?
- Pick a hypothetical SaaS idea and classify its core problem into one of the four quadrants (big/infrequent, small/frequent, etc.). What would you change if it lands in the small/infrequent quadrant?
- For a SaaS with AI-heavy costs and ad-based acquisition, what pricing and product constraints would you need to verify to pass the affordability test?
Key Points
1. Treat “brilliant” SaaS ideas as hypotheses that must survive stress tests for money, feasibility, and demand before building an MVP.
2. Dogfooding isn’t optional: founders should be able to use the product themselves and have the experience to solve the problem at the scale they’re targeting.
3. Classify the user pain by frequency and magnitude; small/infrequent problems are a major warning sign because they rarely justify payment.
4. Affordability is about revenue per user versus per-user expenses, including infrastructure, AI usage, onboarding/support, and acquisition costs.
5. Existing tools can make a new SaaS unnecessary; if the value overlaps heavily with platforms like Jira, the problem may not be worth solving.
6. Pricing must match both cost structure and market willingness; competitive acquisition and expensive integrations can force higher price points.
7. A failed first attempt can still be productive if it reveals which part of the triangle was broken and informs a second, better-aligned launch.