
I spent 500 hours in ChatGPT, here’s what I learned

David Ondrej · 6 min read

Based on David Ondrej's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use GPT-4o for most everyday tasks, GPT-4.5 for more humanlike conversation, and o3-mini-high for math/science/coding that benefits from deeper reasoning.

Briefing

Spending hundreds of hours in ChatGPT leads to one practical conclusion: the biggest gains come less from “better prompts” and more from using the right model, configuring ChatGPT to match your work, and reducing friction so you actually use it daily. The playbook centers on three core models—GPT-4o for most tasks, GPT-4.5 for more humanlike conversation, and o3-mini-high for harder reasoning like math, science, and coding—so users stop wasting time on the wrong tool for the job.

Beyond model choice, the workflow gets a major upgrade through Projects and Custom Instructions. Projects let people create separate ChatGPT workspaces for different parts of life (for example, one for YouTube), attach relevant files like PDFs, and apply preset instructions that stay active across chats in that project. Customizing ChatGPT via the profile menu turns the assistant into a personal system prompt: users can set preferences such as avoiding small talk, prioritizing clarity and understanding, and steering away from telling them to consult professionals. The result is a version of ChatGPT that behaves consistently—far more useful than a blank account.

Image generation is treated as a business tool rather than a novelty. The key advantage highlighted is not just producing images from text, but converting or editing existing images. By attaching an image and instructing ChatGPT to add elements (like inserting a croissant into a photo), users can iterate toward logos, branding assets, and marketing visuals. The same approach is used for “viral format” thumbnails and ads: take inspiration from a successful thumbnail or advertisement, then prompt ChatGPT to recreate a realistic version in the same style while swapping the subject (e.g., crossing the Sahara instead of the Pacific).

To make ChatGPT part of everyday life, the advice shifts to setup and access. Setting ChatGPT as the default browser tab, installing the desktop app, and using the desktop widget (opened via a keyboard shortcut) all aim to cut delays that cause people to fall back on Google. Deep Research is positioned as the “personal researcher” for multi-step questions, but it requires careful clarification before it starts and can take 5–15 minutes. Access to stronger reasoning models inside Deep Research is presented as a major reason to pay for higher tiers.

Mobile use gets equal emphasis: the phone app supports camera-based analysis (like photographing an ingredient label and asking whether it fits dietary goals) and advanced voice mode for real-time conversation. Voice mode is framed as portable journaling and brainstorming—especially during walks—while the lock-screen widget further reduces friction.

Finally, the transcript lays out practical prompting fundamentals: include examples, use clear descriptive language, assign roles for better perspective, and repeat the most important instruction after a context dump. Canvas is recommended for drafting documents with a split interface (like letters or CV updates), and “rerun” plus model switching is suggested when outputs miss the mark.
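As a sketch, the prompting checklist above can be rolled into a small template builder. The function and field names here are illustrative assumptions, not anything shown in the video:

```python
def build_prompt(role: str, task: str, context: str,
                 examples: list[str], key_instruction: str) -> str:
    """Assemble a prompt using the transcript's fundamentals:
    assign a role, use clear language, include examples, and
    repeat the most important instruction after the context dump."""
    parts = [f"You are {role}.", task]
    for i, example in enumerate(examples, 1):
        parts.append(f"Example {i}:\n{example}")
    parts.append(f"Context:\n{context}")
    # Repeat the key instruction last, after the context dump:
    parts.append(f"Most important: {key_instruction}")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="a senior copy editor",
    task="Rewrite my cover letter to be concise.",
    context="(cover letter text here)",
    examples=["Before: 'I am writing to...' After: 'I'm applying for...'"],
    key_instruction="Keep it under 200 words.",
)
print(prompt.splitlines()[-1])  # the repeated key instruction comes last
```

The only non-obvious choice is ordering: the key instruction is appended after the context block, matching the transcript's advice that long context dumps tend to bury whatever came before them.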

Paid plans are discussed with a cost-benefit lens. ChatGPT Plus ($20/month) is portrayed as the best value for most users because it unlocks higher limits, better reasoning access, and Deep Research. The $200/month tier is described as mainly for heavy users—creators, researchers, programmers, or anyone hitting limits—because it adds extended access to reasoning models, advanced voice, and more Deep Research, plus extra access to Sora. Sora is presented as a top video generator for creating ad-style footage from prompts, with the added note that it can also generate images—making it useful for creative workflows without traditional production costs.

Cornell Notes

The core lesson is that ChatGPT becomes dramatically more useful when users treat it like a configured tool, not a generic chatbot. The transcript recommends sticking to three models—GPT-4o for most tasks, GPT-4.5 for more humanlike conversation, and o3-mini-high for deeper reasoning like math, science, and coding. Projects and Custom Instructions help tailor behavior across chats, attach files (like YouTube-related PDFs), and enforce preferences such as avoiding small talk and prioritizing clarity. For heavy information needs, Deep Research can act like a personal researcher, but it requires clarifying questions and takes 5–15 minutes. Finally, reducing friction—default browser tab, desktop widget, and mobile voice/camera—makes people use ChatGPT often enough for the benefits to compound.

How should a user choose among ChatGPT’s models to avoid slow or low-quality outputs?

The transcript recommends three models as a default set: GPT-4o for most problems because it’s fast and supports capabilities like code interpretation, image generation, and web browsing; GPT-4.5 as the favorite for more humanlike conversation, with the tradeoff that it’s slower and not the best for coding/math benchmarks; and o3-mini-high as a reasoning model that performs better on math, science, and coding by “thinking” before answering.
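For readers who script against models rather than use the chat UI, the three-model rule can be sketched as a tiny routing helper. The function name and task categories are illustrative assumptions; only the three model names come from the transcript:

```python
# Hypothetical helper that routes a task to one of the three models
# the transcript recommends. Task categories are illustrative.

REASONING_TASKS = {"math", "science", "coding"}

def choose_model(task_type: str, humanlike: bool = False) -> str:
    """Pick a model following the transcript's three-model rule."""
    if task_type.lower() in REASONING_TASKS:
        return "o3-mini-high"   # "thinks" before answering; best on hard reasoning
    if humanlike:
        return "gpt-4.5"        # slower, but more natural conversation
    return "gpt-4o"             # fast default with code/image/browsing tools

print(choose_model("coding"))                    # o3-mini-high
print(choose_model("email", humanlike=True))     # gpt-4.5
print(choose_model("summary"))                   # gpt-4o
```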

What do Projects and Custom Instructions change about day-to-day ChatGPT use?

Projects create separate workspaces for different life areas (e.g., a YouTube project). Inside a project, users can attach files such as PDFs and set custom instructions that apply across all chats in that project. Customizing ChatGPT via the profile menu adds a persistent layer of preferences—like avoiding small talk, seeking understanding and clarity, and not giving “consult a professional” style responses—so outputs stay consistent without re-prompting every time.

Why is ChatGPT’s image feature framed as more valuable than text-to-image alone?

The transcript argues that the real power is image-to-image editing. Instead of only generating a standalone image from a prompt, users can attach an existing image and ask ChatGPT to insert or modify elements in real time (e.g., adding a croissant into a photo). This supports practical uses like branding, logos, and marketing assets, including recreating realistic YouTube thumbnails based on viral formats.

How can Deep Research be used effectively, and what makes it different from quick Q&A?

Deep Research is positioned as a multi-step researcher that can take 5–15 minutes. It typically asks at least one clarification question before it starts, so users shouldn’t send a vague prompt and walk away. The transcript emphasizes providing detailed constraints (topic focus like LLMs, whether to include all relevant papers, time range like 2025, and whether the goal is practical implementation or theoretical ideas). It also claims Deep Research can access stronger reasoning models not available elsewhere.

What are the transcript’s main tactics for getting better answers when the first response isn’t good?

When an answer is unsatisfactory, the transcript suggests first editing the message (improving the prompt with more context and clearer instructions). If the prompt is already solid, switching models and rerunning is presented as a fast “split test” to see whether another model produces a better result. It also recommends rerunning with the same task rather than abandoning the workflow.
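The edit-then-switch tactic can be sketched as a small "split test" loop. The helper, the `run` callback, and the scoring function are all illustrative assumptions (stubbed here rather than calling any real API):

```python
from typing import Callable

def split_test(prompt: str,
               models: list[str],
               run: Callable[[str, str], str],
               score: Callable[[str], float]) -> tuple[str, str]:
    """Rerun the same prompt across models and keep the best-scoring answer.

    `run(model, prompt)` and `score(answer)` are placeholders for however
    you actually call each model and judge its output.
    """
    best_model, best_answer, best_score = "", "", float("-inf")
    for model in models:
        answer = run(model, prompt)
        s = score(answer)
        if s > best_score:
            best_model, best_answer, best_score = model, answer, s
    return best_model, best_answer

# Toy usage with stubbed model calls:
fake_outputs = {"gpt-4o": "short", "o3-mini-high": "longer, reasoned answer"}
model, answer = split_test(
    "Solve this step by step",
    ["gpt-4o", "o3-mini-high"],
    run=lambda m, p: fake_outputs[m],
    score=len,   # crude toy proxy: prefer the longer answer
)
print(model)  # o3-mini-high wins under this toy scorer
```

The point, per the transcript, is that the prompt stays fixed: only the model varies, so any difference in output quality is attributable to the model choice.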

Which setup changes reduce friction and increase how often people use ChatGPT?

The transcript recommends setting ChatGPT as the default browser tab, installing the desktop app, and using the desktop widget via a keyboard shortcut (Alt+Space on Windows; Option+Space on macOS) to avoid slow tab switching. On mobile, it highlights the app’s camera and advanced voice mode, plus lock-screen widgets for instant access—so users can ask questions or analyze items immediately while out and about.

Review Questions

  1. Which three models does the transcript recommend using most often, and what type of task is each best suited for?
  2. How do Projects and Custom Instructions work together to make ChatGPT outputs more consistent across different goals?
  3. What steps should a user take before starting Deep Research to improve the quality of results?

Key Points

  1. Use GPT-4o for most everyday tasks, GPT-4.5 for more humanlike conversation, and o3-mini-high for math/science/coding that benefits from deeper reasoning.
  2. Create Projects for different life domains (like YouTube) so attached files and custom instructions apply automatically across related chats.
  3. Customize ChatGPT with persistent preferences (e.g., avoid small talk, prioritize clarity, and set response style) to reduce repeated prompting.
  4. Treat image generation as editing: attach images and request specific transformations for logos, branding, thumbnails, and ad creatives.
  5. Reduce friction so ChatGPT becomes your default tool—set it as a browser tab, use the desktop widget shortcut, and rely on mobile voice/camera plus lock-screen access.
  6. Use Deep Research for multi-step questions, but clarify the goal first and expect a 5–15 minute turnaround for stronger results.
  7. When outputs miss the mark, improve the prompt via edit message first; if the prompt is already strong, rerun with a different model to compare results quickly.

Highlights

The biggest performance jump comes from using the right model for the job: GPT-4o for most tasks, GPT-4.5 for humanlike dialogue, and o3-mini-high for reasoning-heavy math/science/coding.
Projects turn ChatGPT into multiple specialized assistants by combining attached files (like PDFs) with persistent instructions across chats.
ChatGPT’s image editing is framed as the real advantage—attach an image and instruct edits (e.g., inserting a croissant) to produce branding and marketing assets.
Deep Research functions like a personal researcher but requires clarification before it starts and can take 5–15 minutes.
The transcript repeatedly ties success to reduced friction: default browser access, desktop widget shortcuts, and mobile voice/camera/lock-screen widgets make frequent use realistic.

Topics

  • ChatGPT Models
  • Projects and Custom Instructions
  • Image Editing
  • Deep Research
  • Desktop and Mobile Setup
  • Prompting Techniques
  • Canvas and Rerun
  • ChatGPT Plans and Sora
