Make Money with GPTs, Here’s How

David Ondrej · 5 min read

Based on David Ondrej's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Revenue sharing for GPT creators is expected soon, so building and gaining early traction now can matter more than waiting for the program to be fully live.

Briefing

OpenAI’s GPT Store is becoming a fast path to income because revenue sharing for GPT creators is expected to roll out soon—while the pool of builders is still relatively small. The core bet: build a custom GPT now, get it into the top charts early, and ride the demand surge created by broad access to GPTs across tens of millions of users. With no built-in “best GPT” ranking algorithm yet, early traction—often the first 100 to 1,000 chats—can determine whether a GPT gains momentum.

The practical starting point is finding GPTs in the store (via the sidebar’s “Explore GPTs”) and studying what’s already working. The strategy is to search by niche and look for gaps: if a niche has only a few GPTs, yet those GPTs show very high chat counts (for example, 100K+), demand clearly exists and there’s room to build something better. Builders are advised to choose either a mass-market approach (aiming for virality) or a super-niche approach (aiming for tens of thousands of chats more reliably), then iterate based on real user behavior.

Building the GPT itself hinges on configuration choices—especially the system prompt. The guidance is blunt: mediocre GPTs come from spending too little time on the system prompt, while strong GPTs come from hours (often 10–20+), refined through repeated “user-like” testing across many scenarios. The system prompt functions as the GPT’s operating instructions, and the quality of that prompt is treated as the main differentiator.
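
To make that concrete, here is a hypothetical skeleton of what a serious system prompt can look like; the bracketed placeholders are illustrative assumptions, not instructions from the video:

```
You are [RoleName], a GPT that helps [target user] accomplish [specific task].

Always:
- Ask one clarifying question before answering if the request is ambiguous.
- Respond in [format, e.g., numbered steps under 200 words].
- When using uploaded Knowledge files, say which document a claim comes from.

Never:
- Give advice outside [niche]; politely redirect to the GPT's purpose.
- Invent facts that are not in the conversation or the provided documents.
```

The point of the “10–20+ hours” guidance is that every one of these rules gets stress-tested by prompting the GPT as a user would, across many scenarios, and revised when it misbehaves.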

Beyond instructions, the transcript emphasizes three technical levers. First is “Knowledge,” which lets a GPT reference proprietary or copyrighted documents (like PDFs) using retrieval-augmented generation—useful because much of the web is effectively inaccessible to standard browsing (e.g., content behind logins). Second is “Capabilities,” where enabling everything by default is discouraged; web browsing, image generation, and code interpreter should be turned on only when they match the GPT’s purpose. Third is “Actions,” which connect the GPT to external services through APIs using JSON schema, enabling custom functions such as pulling live weather or triggering workflows.
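
To ground the Actions idea, here is a minimal sketch of the kind of backend endpoint a “get current weather” action could call. The FastAPI app and the wttr.in service are assumptions for illustration, not details from the video:

```python
# Minimal sketch of a backend a "get current weather" Action could call.
# FastAPI and the wttr.in weather service are illustrative choices;
# any HTTP framework and weather API would work the same way.
from fastapi import FastAPI
import httpx

app = FastAPI()

@app.get("/weather")
async def get_current_weather(city: str) -> dict:
    """Return a compact current-weather summary for the given city."""
    async with httpx.AsyncClient() as client:
        # wttr.in exposes a simple JSON endpoint via ?format=j1.
        resp = await client.get(f"https://wttr.in/{city}?format=j1")
        resp.raise_for_status()
        current = resp.json()["current_condition"][0]
    return {
        "city": city,
        "temperature_c": current["temp_C"],
        "description": current["weatherDesc"][0]["value"],
    }
```

The GPT calls an endpoint like this over HTTP whenever the conversation matches the action’s description in the schema the builder registers.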

To get the first wave of users, the transcript lays out three traction methods: (1) content marketing that demonstrates the GPT solving a real problem (tutorials, shorts, or posts), (2) cold outreach via DMs or comments to niche communities without needing a large following, and (3) partnering with mid-sized influencers (roughly 10,000–200,000 followers) to promote the GPT to their audience. The goal is consistent daily effort until the GPT clears the early threshold and starts generating reviews and organic visibility.

Finally, there’s a cautionary tale about losing access to GPTs. Switching from the ChatGPT Team plan back to a lower tier can make previously created GPTs uneditable or inaccessible, a limitation the video treats as effectively one-way (possibly a bug). The advice: decide on the plan carefully before upgrading, because the creator’s ability to maintain and improve a GPT may depend on staying on the right tier. Overall, the transcript frames GPT building as a time-intensive but unusually open opportunity: build early, optimize the system prompt, add real differentiators (knowledge and actions), and push hard for initial chats before the revenue share model fully lands.

Cornell Notes

The transcript argues that GPT creators can position themselves for revenue sharing by building custom GPTs early, before the market becomes crowded. Success depends less on flashy setup and more on investing substantial time (often 10–20+ hours) into the system prompt and testing it like a user across many scenarios. Differentiation comes from adding useful “Knowledge” (PDFs and proprietary data via retrieval-augmented generation), selecting the right “Capabilities” (avoid enabling everything), and building “Actions” that connect to external APIs through JSON schema. Because GPT Store ranking lacks a clear “best GPT” algorithm, creators must manufacture early traction—typically the first 100 to 1,000 chats—using content, cold outreach, or influencer partnerships. A final warning: switching plans can cause loss of access to GPTs, so plan upgrades should be treated as potentially irreversible.

Why does early GPT building matter more than waiting for revenue sharing to launch?

Revenue sharing for GPT creators is described as imminent but not yet implemented. That timing creates a window where demand may rise (GPTs can reach a very large user base) while the number of active builders is still relatively low. The transcript frames this as an opportunity to become visible before competition intensifies, especially because there’s no guarantee the store will automatically promote the “best” GPTs.

What’s the biggest quality lever when building a GPT?

The system prompt. The transcript repeatedly emphasizes that mediocre GPTs come from spending too little time on system instructions, while strong GPTs come from hours of refinement. It also stresses iterative testing: after writing the prompt, the builder should “prompt it as if you’re the user,” covering many situations and refining based on outcomes.

How do “Knowledge” and retrieval-augmented generation help a GPT stand out?

“Knowledge” lets a GPT reference documents that aren’t accessible through normal web access—such as proprietary PDFs or copyrighted materials the creator has rights to. The transcript ties this to retrieval-augmented generation: the GPT can pull relevant passages from those documents instead of pretending it “knows everything.” It also notes that much of the web is effectively inaccessible to general search (e.g., behind logins), so private or specialized data can be a major differentiator.
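
As a rough mental model (an illustrative assumption; the video doesn’t describe OpenAI’s internals), retrieval-augmented generation splits documents into chunks, embeds them as vectors, and injects the chunks most similar to the user’s question into the prompt:

```python
# Toy retrieval step for RAG, for illustration only. embed() is a
# placeholder; a real system uses a trained embedding model, and the
# GPT builder's Knowledge feature handles all of this automatically.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: pseudo-random vector derived from the text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k document chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# The retrieved chunks get prepended to the model's context, so answers
# come from the document instead of the model's generic training data:
# context = "\n\n".join(retrieve(user_question, pdf_chunks))
```

This is why private PDFs are a differentiator: the model answers from material that general web browsing can never reach.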

Why shouldn’t builders enable every capability by default?

Capabilities act like permissions. The transcript warns that leaving image generation on for a GPT meant for fitness advice invites off-purpose outputs whenever users ask for images. The recommendation is to enable only what matches the GPT’s intended job: web browsing when live information is needed, code interpreter when math or coding is useful, and image generation only when the GPT is actually designed around it.

What are “Actions,” and why are custom functions valuable?

Actions connect a GPT to external services via API calls defined using JSON schema. The transcript gives the weather example: if a GPT has an action like “get current weather,” it can call the function when a user asks about weather. Custom functions are positioned as a key differentiator because most builders won’t invest the effort to wire up real integrations.
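
For concreteness, the schema registered in the Actions panel is an OpenAPI document. A minimal hypothetical spec for the weather example might look like the following (the server URL and identifiers are placeholders, not values from the video), built here as a Python dict so it can be dumped to JSON:

```python
# Hypothetical minimal OpenAPI spec for a "get current weather" Action.
# Server URL and operation names are placeholders for illustration.
import json

spec = {
    "openapi": "3.1.0",
    "info": {"title": "Weather Action", "version": "1.0.0"},
    "servers": [{"url": "https://your-api.example.com"}],
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getCurrentWeather",
                "summary": "Get the current weather for a city",
                "parameters": [{
                    "name": "city",
                    "in": "query",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Weather summary"}},
            }
        }
    },
}

# json.dumps(spec, indent=2) produces the JSON to paste into the builder.
print(json.dumps(spec, indent=2))
```

The `operationId` is what the model uses to decide which function it is calling when a user asks about the weather.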

How do creators get the first 100–1,000 chats that kick off organic growth?

Three methods are recommended: (1) content marketing that demonstrates the GPT solving a specific problem (tutorials, shorts, posts), (2) cold outreach—DMs or comments to active niche accounts without needing a big following—using the GPT as a free offer, and (3) influencer partnerships with mid-sized creators (roughly 10,000–200,000 followers) who can promote the GPT to their audience. The aim is early momentum so reviews and visibility can follow.

Review Questions

  1. What specific parts of GPT configuration are treated as the main drivers of user adoption (and why)?
  2. How does the transcript suggest identifying market gaps inside the GPT Store?
  3. What risks does the transcript highlight when switching GPT plan tiers, and what behavior should a creator avoid?

Key Points

  1. Revenue sharing for GPT creators is expected soon, so building and gaining early traction now can matter more than waiting for the program to be fully live.
  2. The system prompt is the primary quality lever; serious GPTs require extensive writing and iterative user-like testing (often 10–20+ hours).
  3. “Knowledge” (PDFs and proprietary documents) can differentiate GPTs by enabling retrieval-augmented generation instead of relying on generic knowledge.
  4. Capabilities should be enabled selectively; turning on tools like image generation when they don’t match the GPT’s purpose can create bad outputs.
  5. “Actions” let GPTs call external APIs via JSON schema; custom functions are a major differentiator because they enable real workflows.
  6. Early growth depends on manufactured traction (first 100–1,000 chats) since there’s no clear algorithm yet that automatically promotes the best GPTs.
  7. Plan switching can cause loss of access to previously created GPTs; upgrades and downgrades should be treated as potentially one-way decisions.

Highlights

The system prompt is framed as the make-or-break factor: hours of refinement and scenario testing separate GPTs that get tens of thousands of chats from those that stall.
A major differentiator is adding real “Knowledge” and “Actions,” not just turning on every capability—selective permissions prevent mismatched behavior.
Because store ranking lacks a “best GPT” mechanism, creators must engineer early momentum through content, cold outreach, or influencer partnerships.
Switching from the ChatGPT Team plan back to a lower tier can make GPTs uneditable or inaccessible, turning maintenance into a risk.
