
They BEAT Open AI at Their OWN GAME!

MattVidPro · 5 min read

Based on MattVidPro's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.

TL;DR

Better ChatGPT is an open-source, API-based alternative to ChatGPT that adds workflow tools and per-chat customization.

Briefing

A new open-source project called “Better ChatGPT” is positioning itself as a more powerful, more customizable alternative to ChatGPT—without locking users into a single hosted experience. Built on the ChatGPT API, it adds practical workflow features (prompt libraries, chat organization, local storage, export/import, and cloud sync) and deeper control over how each conversation behaves, including per-chat system prompts and adjustable model parameters. The pitch matters because it shifts capability from a fixed consumer app toward something developers and advanced users can tailor, host locally, or even run in regions where ChatGPT access is limited.

Better ChatGPT’s core setup offers three routes: using an API key (with the user paying for their own API usage), using a “free ChatGPT API” option for regions without access, or hosting a self-managed API endpoint locally on Windows, macOS, or Linux. The project is presented as fully open source on GitHub, meaning developers can download and modify it. That flexibility is paired with features aimed at day-to-day productivity: a proxy to bypass regional restrictions, a built-in prompt library, folder-based chat organization with colors, token-count and pricing visibility per chat, and the ability to share conversations and prompts via ShareGPT integration.
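To make the "bring your own API key" route concrete, here is a minimal sketch of the kind of request an API-based client like Better ChatGPT sends when you supply your own key. The endpoint and field names follow the public OpenAI chat completions format; the helper function itself is illustrative, not the project's actual code.

```python
import json

# Public chat completions endpoint used by API-key-based clients.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(api_key: str, user_message: str, model: str = "gpt-3.5-turbo"):
    """Assemble the headers and JSON body for a single chat completion call.

    With this route, the user pays for their own API usage per token.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",  # your own OpenAI key
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return headers, json.dumps(body)

# Example: build (but don't send) a request with a placeholder key.
headers, body = build_request("sk-YOUR-KEY-HERE", "Hello!")
```

Sending this payload with any HTTP client completes the call; the self-hosted route simply points `API_URL` at a locally managed endpoint instead.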

Where the project most clearly differentiates itself is conversation control. Users can manipulate the “system prompt” that defines the assistant’s role and behavior, and the interface exposes this as a first-class setting. By changing the system prompt (for example, switching between “assistant,” “user,” or other roles), the assistant’s responses can be steered to match the intended context. The transcript demonstrates extreme customization: setting the assistant to behave like a “pet hamster” that speaks only in phonetic hamster sounds, and using system-prompt manipulation to make the model produce content that would normally be blocked by standard jailbreak attempts. The workflow also supports editing, reordering, and inserting messages, plus generating and saving chat titles automatically.
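The per-chat system prompt and message manipulation described above can be sketched as an editable list of role-tagged messages. This structure (an assumption for illustration, mirroring the standard chat API message format rather than Better ChatGPT's internals) shows why inserting or reordering messages changes what the model sees as context.

```python
def make_chat(system_prompt: str):
    """Start a chat whose behavior is defined by an editable system prompt."""
    return [{"role": "system", "content": system_prompt}]

def add_message(chat, role, content, position=None):
    """Append a message, or insert it anywhere in the history.

    Editing, reordering, and inserting messages all reduce to operations
    on this list, which is resent to the model on every turn.
    """
    msg = {"role": role, "content": content}
    if position is None:
        chat.append(msg)
    else:
        chat.insert(position, msg)
    return chat

# Per-chat persona: the transcript's "pet hamster" example.
chat = make_chat("You are a pet hamster. Reply only in phonetic hamster sounds.")
add_message(chat, "user", "How are you today?")
```

Because the whole list is the model's context, replacing the system message (or splicing in an example assistant reply) steers subsequent responses, which is exactly the steering behavior the transcript demonstrates.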

Better ChatGPT also expands model options and context length. The interface includes selectable models such as “gpt-3.5-turbo-16k” (described as supporting up to 16,000 tokens) and “gpt-4” variants with larger context windows (including a 32,000-token option mentioned in the transcript). Adjustable generation settings—temperature, top-p, presence penalty, and frequency penalty—are presented as controls for randomness and repetition, with guidance to keep defaults for most users. The practical payoff is clear: longer context windows can reduce the common problem of losing earlier conversation details when tokens run out.
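The adjustable generation settings can be summarized in a small sketch using the parameter names from the public chat completions API. The defaults-first design below is an assumption, but it reflects the transcript's guidance that most users should leave these values alone.

```python
# Recommended-default generation settings, per the chat completions API.
DEFAULTS = {
    "temperature": 1.0,        # lower = more deterministic, higher = more random
    "top_p": 1.0,              # nucleus sampling; an alternative to temperature
    "presence_penalty": 0.0,   # positive values encourage new topics
    "frequency_penalty": 0.0,  # positive values discourage verbatim repetition
}

def generation_config(**overrides):
    """Start from the defaults and override only what you need."""
    cfg = dict(DEFAULTS)
    for key, value in overrides.items():
        if key not in DEFAULTS:
            raise KeyError(f"unknown setting: {key}")
        cfg[key] = value
    return cfg

# Example: nudge output toward deterministic, repeatable answers.
cfg = generation_config(temperature=0.2)
```

Keeping the other knobs at their defaults, as the transcript advises, avoids the unstable output that aggressive tuning can produce.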

In use, the project looks and feels close to ChatGPT, but with added settings panels, light/dark mode, and toggles like “enter to submit.” Users can import prompt packs via CSV, create reusable prompts, and organize chats into colored folders that preserve per-chat settings. The transcript closes with a comparison: unless someone already pays for ChatGPT Plus, there may be little reason not to try Better ChatGPT—especially for users who want more control, longer context, and the ability to run or host the system themselves.

Cornell Notes

Better ChatGPT is an open-source, API-based alternative to ChatGPT that adds both workflow features and deeper control over how each conversation behaves. It supports multiple ways to use it: entering an OpenAI API key, using a free API option for regions without access, or hosting locally on Windows, macOS, or Linux. The interface adds prompt libraries, chat folders with colors, token/pricing breakdowns, import/export, and cloud sync. Most importantly, it exposes per-chat system prompts and message manipulation, letting users steer the assistant’s role and output style. Longer context models (including 16k and 32k options mentioned) aim to reduce context loss when chats get large.

What are the main ways to start using Better ChatGPT, and what trade-offs come with each?

The transcript lists three entry paths: (1) use the website’s API menu by pasting an OpenAI API key from the OpenAI “API Keys” page—this requires paying for your own API usage; (2) use a “free ChatGPT API” option for users in regions with no access to ChatGPT—positioned as a way to access the API without paying directly; and (3) host your own API endpoint locally by following instructions in the project, allowing the app to run on Windows, macOS, or Linux and giving more control over security and deployment.

How does Better ChatGPT change conversation behavior compared with standard ChatGPT?

It elevates the system prompt into a user-editable control. Users can set the system prompt to different roles (assistant/user) to shift the conversation context, and can define custom behavior—such as instructing the assistant to act like a “pet hamster” that speaks only in phonetic hamster sounds. The transcript also demonstrates message-level manipulation (editing/replacing responses and inserting examples) that can make the model behave differently than it would under default settings.

What practical features help users manage many chats and prompts?

The project includes a prompt library (with an example English translator prompt and the ability to import CSV prompt packs), folder-based organization with color coding, and tools to filter chats by token count and pricing. It also supports ShareGPT integration for sharing prompts/conversations, plus import/export and downloading chats in multiple file types, and syncing chats to Google Drive and an Azure OpenAI endpoint.
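Loading a CSV prompt pack into a reusable library might look like the sketch below. The two-column (name, prompt) layout is an assumption for illustration; the actual CSV format Better ChatGPT imports may differ.

```python
import csv
import io

def load_prompt_pack(csv_text: str) -> dict:
    """Parse a (name, prompt) CSV pack into a name -> prompt library."""
    library = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) >= 2:
            name, prompt = row[0].strip(), row[1].strip()
            library[name] = prompt
    return library

# Example pack with the transcript's English-translator prompt.
pack = "English Translator,Translate everything I say into English.\n"
library = load_prompt_pack(pack)
```

Each entry then becomes a one-click reusable prompt, and the same dictionary could back export/sharing features like the ShareGPT integration.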

Which model and generation settings are highlighted, and why do they matter?

The transcript emphasizes selectable models with larger context windows: “gpt-3.5-turbo-16k” (16,000 tokens) and a “gpt-4” variant with up to 32,000 tokens, compared with the 8,000-token maximum of standard “gpt-4.” It also discusses temperature (0 to 1, with lower values more deterministic and higher values more random), top-p as an alternative sampling control, and presence/frequency penalties to reduce repetition or encourage new topics. Defaults are recommended to avoid unstable outputs.

What demonstrations suggest about “jailbreak” resistance and system-prompt steering?

A jailbreak prompt copied from a popular source is pasted into the system prompt, and the transcript claims the assistant becomes more permissive than standard ChatGPT. It also shows a tactic: deleting the original response and providing an example in the assistant prompt containing disallowed content, which then influences the model’s next output. The takeaway is that system-prompt and contextual examples can materially change refusal behavior.

Review Questions

  1. How does per-chat system prompt editing affect the assistant’s role and output style, and what example from the transcript illustrates this?
  2. Why might longer context window models (16k/32k) be more useful than standard context limits for long research-style chats?
  3. What risks or downsides does the transcript hint at when generation settings like temperature are pushed too far?

Key Points

  1. Better ChatGPT is an open-source, API-based alternative to ChatGPT that adds workflow tools and per-chat customization.

  2. Users can start via an OpenAI API key, a “free ChatGPT API” option for restricted regions, or by hosting their own endpoint locally on Windows, macOS, or Linux.

  3. The interface includes prompt libraries, folder organization with colors, token/pricing breakdowns, and import/export plus cloud sync options.

  4. Per-chat system prompts and message manipulation let users steer the assistant’s behavior, ranging from role changes to highly specific speaking styles.

  5. Model selection includes larger context options such as gpt-3.5-turbo-16k and gpt-4 variants with up to 32,000 tokens mentioned, aiming to reduce context loss.

  6. Generation controls (temperature, top-p, presence penalty, frequency penalty) can improve or destabilize outputs depending on how they’re tuned.

  7. ShareGPT integration and the ability to share prompts/conversations broaden reuse beyond individual chats.

Highlights

Better ChatGPT exposes the system prompt as a user-editable control, enabling per-chat role and behavior changes (including a “pet hamster” speaking mode).
Longer context models are a central selling point, with 16k and 32k token options mentioned to keep more of a conversation available at once.
The transcript demonstrates that system-prompt manipulation plus contextual examples can alter refusal behavior compared with default ChatGPT settings.
