
How To Find BRILLIANT SaaS Ideas

Simon Høiberg · 5 min read

Based on Simon Høiberg's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Treat “brilliant” SaaS ideas as hypotheses that must survive stress tests for money, feasibility, and demand before building an MVP.

Briefing

A “brilliant” SaaS idea isn’t a lucky spark—it’s something that survives brutal stress tests for money, execution, and demand. The core lesson from a failed 100-day startup sprint is that even a seemingly motivating concept (like development leaderboards) can collapse if the problem isn’t valuable, the team can’t realistically build it, or users won’t pay.

The story begins in late 2019, when a software consultant and a colleague joked about launching a tech startup in 100 days. The mission quickly narrowed to building and launching a SaaS product within that deadline. The first idea came from sales, where leaderboards showed who ranked highest on revenue and performance, and that visibility drove motivation. The same logic seemed missing in software development, so the plan was to create a dashboard/leaderboard showing metrics like who reviewed the most PRs and who committed the most changes, with managers able to report on team performance.

Validation looked promising at first. A landing page and social posts generated newsletter signups, and the MVP was built over the next 90 days. On day 100, outreach went out via email, Reddit, Indie Hackers, and Product Hunt. Then the results turned: no one from the newsletter converted, and there were no meaningful upgrades. After watching the numbers for days, the conclusion arrived—this was a major failure, not a slow burn.

That collapse becomes the foundation for three tests designed to prevent “triangle-broken” ideas from reaching the build stage. The first test is “dogfooding”: can the founder realistically use the product while building it, and do they have the experience to solve the problem at scale? The leaderboard concept failed here because development teams aren’t sales teams, and the founder lacked the product-owner or team-performance context needed to judge feasibility and user desire.

The second test asks whether the problem is worth solving by placing it into a four-quadrant map: big/infrequent, small/frequent, small/infrequent, or big/frequent. The leaderboard problem was treated as a small and infrequent need—especially since tools like Jira already provide strong project-level reporting—so the “team performance leaderboard” wasn’t a sufficiently urgent pain.

The third test is affordability: can the business sustain the economics of the product at a price users will pay, given acquisition costs, onboarding/support burden, and per-user expenses (including infrastructure and AI usage)? The founder’s later experience with FeedHive shows how pricing must match both market willingness and cost structure. Starting at $5/month for a social media management tool proved unrealistic due to competitive acquisition and expensive third-party integrations and AI. Pricing was adjusted to $19, $29, or $99 per month, aligning with a problem that’s both big and frequent and with deep domain expertise to keep the product competitive.

The takeaway is blunt: a SaaS idea needs to close the viability–feasibility–desirability triangle and pass aggressive checks before time and money get sunk into an MVP. The founder’s second attempt, FeedHive, succeeded because it cleared those gates—while the leaderboard concept didn’t.

Cornell Notes

The founder’s 100-day SaaS sprint produced a “brilliant” idea that still failed: a development leaderboard meant to motivate engineers and help managers report performance. Newsletter signups didn’t translate into paid upgrades, revealing a broken viability–feasibility–desirability triangle. Three stress tests prevent this: (1) dogfooding—does the founder truly understand and have the experience to build for the problem at scale? (2) problem worth solving—does the pain happen often enough and matter enough to justify payment, rather than duplicating existing tools like Jira? (3) affordability—can the product be priced to cover acquisition, support, and per-user costs, including AI and integration expenses? FeedHive succeeded after pricing and economics were aligned with a big, frequent user problem.

Why did the development leaderboard idea fail despite early interest?

It generated newsletter signups but produced zero meaningful paid conversions after launch outreach (email, Reddit, Indie Hackers, Product Hunt). The underlying issue was a broken triangle: the founder lacked the experience to judge feasibility and desirability for development-team workflows, and the “team performance leaderboard” wasn’t a big, frequent pain. Existing tooling like Jira already covers much of the reporting need at the project level, leaving only occasional bottlenecks that could be solved differently.

What does “dogfooding” mean in this framework, and how does it connect to feasibility?

Dogfooding means the founder can personally use the product while building it and has enough domain experience to understand the problem at scale. The transcript stresses that experience matters beyond having a general idea—if someone hasn’t managed large recruiting cycles, they shouldn’t build HR software; if they haven’t run agencies or tracked hours at scale, they shouldn’t build time tracking for that context. For the leaderboard, the founder failed to account for cultural differences between sales and development teams, making the solution less realistic and less motivating.

How does the “problem worth solving” test use quadrants to filter ideas?

It classifies the target pain as big/infrequent, small/frequent, small/infrequent, or big/frequent. Big and infrequent problems can still be monetizable because users pay when the pain hits. Small but frequent problems also work because recurring value adds up. Small and infrequent problems are a warning sign: users may notice it, but it’s neither urgent nor costly enough to drive payment. The best case is big and frequent, where users feel the pain often and will pay to remove it.
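
To make the filter concrete, here is a minimal sketch of the quadrant test in Python. The classify_problem helper, the rough 1–10 scale, and the cutoff of 6 are illustrative assumptions for this sketch, not values from the video.

    # Minimal sketch of the quadrant test (scale and thresholds are hypothetical).
    # Rate the pain's magnitude and frequency on a rough 1-10 scale, then read off the quadrant.

    def classify_problem(magnitude: int, frequency: int) -> str:
        """Map a pain's magnitude and frequency to one of the four quadrants."""
        big = magnitude >= 6        # assumed cutoff for a "big" pain
        frequent = frequency >= 6   # assumed cutoff for a "frequent" pain
        if big and frequent:
            return "big/frequent — best case: users feel it often and will pay"
        if big and not frequent:
            return "big/infrequent — monetizable when the pain hits"
        if not big and frequent:
            return "small/frequent — recurring value adds up"
        return "small/infrequent — warning sign: rarely justifies payment"

    # Example: the development leaderboard pain scored as small and infrequent.
    print(classify_problem(magnitude=3, frequency=2))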

What does the affordability test measure, and why isn’t “raise prices” always the answer?

Affordability compares revenue per user against expected expenses per user. A high price can work if per-user costs are also high (e.g., server costs, onboarding, customer support, or high acquisition costs). A low price can also work if overhead is low (simple product, low support needs, low server costs, and word-of-mouth or low-cost acquisition). The transcript warns against blindly raising prices; the key question is whether the economics support the chosen price given product complexity, acquisition competitiveness, and per-user costs.
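
As a rough illustration of that comparison, the sketch below computes a per-user monthly margin from a price and per-user costs. The monthly_margin_per_user helper and all figures are hypothetical placeholders, not numbers from the video.

    # A minimal per-user economics sketch of the affordability test.
    # All figures are hypothetical placeholders.

    def monthly_margin_per_user(
        price: float,            # monthly subscription price
        infra_cost: float,       # servers / third-party integrations per user per month
        ai_cost: float,          # AI usage per user per month
        support_cost: float,     # onboarding and support, amortized per user per month
        acquisition_cost: float, # cost to acquire one user (e.g., ads)
        expected_months: int,    # how long the average user stays subscribed
    ) -> float:
        """Revenue per user minus expected expenses per user, per month."""
        amortized_cac = acquisition_cost / expected_months
        return price - (infra_cost + ai_cost + support_cost + amortized_cac)

    # Hypothetical illustration: a $5/month price with ad-based acquisition and AI costs
    # goes negative, while a higher price over the same cost structure does not.
    print(monthly_margin_per_user(5, 1.5, 2.0, 0.5, 30, 6))   # -4.0 per user per month
    print(monthly_margin_per_user(29, 1.5, 2.0, 0.5, 30, 6))  # 20.0 per user per month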

How did FeedHive’s pricing change reflect the affordability test?

FeedHive launched public beta at $5/month, but the model didn’t fit the economics: it’s a social media management tool dependent on third-party integrations that can be imperfect, and it uses expensive AI services. User acquisition also came via ads in a competitive space, making $5/month unsustainable. Pricing was adjusted to $19, $29, or $99 per month, matching a big, frequent user problem and leveraging the team’s social media expertise to stay competitive.

Review Questions

  1. Which part of the viability–feasibility–desirability triangle was most clearly broken in the leaderboard example, and what evidence from the launch supports that?
  2. Pick a hypothetical SaaS idea and classify its core problem into one of the four quadrants (big/infrequent, small/frequent, etc.). What would you change if it lands in the small/infrequent quadrant?
  3. For a SaaS with AI-heavy costs and ad-based acquisition, what pricing and product constraints would you need to verify to pass the affordability test?

Key Points

  1. Treat “brilliant” SaaS ideas as hypotheses that must survive stress tests for money, feasibility, and demand before building an MVP.

  2. Dogfooding isn’t optional: founders should be able to use the product themselves and have the experience to solve the problem at the scale they’re targeting.

  3. Classify the user pain by frequency and magnitude; small/infrequent problems are a major warning sign because they rarely justify payment.

  4. Affordability is about revenue per user versus per-user expenses, including infrastructure, AI usage, onboarding/support, and acquisition costs.

  5. Existing tools can make a new SaaS unnecessary; if the value overlaps heavily with platforms like Jira, the problem may not be worth solving.

  6. Pricing must match both cost structure and market willingness; competitive acquisition and expensive integrations can force higher price points.

  7. A failed first attempt can still be productive if it reveals which part of the triangle was broken and informs a second, better-aligned launch.

Highlights

A leaderboard for development performance looked like a direct transplant from sales motivation, but it failed to convert newsletter interest into paid upgrades.
The framework’s “problem worth solving” test warns that small/infrequent pains often fail monetization even when users recognize the issue.
FeedHive’s pricing adjustment—from $5/month to $19/$29/$99—was driven by affordability realities: AI costs, integration friction, and competitive ad acquisition.
The affordability test reframes pricing as an economics fit, not a universal rule to “raise prices.”
