Productivity GOD with AI Agent Data-Driven Decisions (GPT-4)
Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A practical workflow turns live YouTube performance data into tightly targeted video concepts by combining the YouTube Data API with GPT-4 and a multi-agent critique loop. The core idea is simple: pull engagement metrics from selected channels, compute a “views-to-likes” popularity score, let GPT-4 extract correlations and generate topic ideas, then use additional agents to pressure-test those ideas for audience fit and engagement hooks—especially fear and FOMO.
The process starts by selecting a handful of tech channels (Marques Brownlee, Linus Tech Tips, Unbox Therapy, Mrwhosetheboss, and The Verge). For each channel, the script collects the YouTube channel ID by inspecting the channel page’s HTML and searching for “channelId.” With those IDs, the workflow queries the YouTube Data API v3. On the infrastructure side, it requires a Google Cloud project with the YouTube Data API enabled and an API key, plus an OpenAI API key for GPT-4.
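The fetch step can be sketched with plain HTTP calls against the Data API. The `search` and `videos` endpoints and their `part` parameters are the API's real ones; the key placeholder, helper names, and the exact schema fields are assumptions based on the workflow's description:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_YOUTUBE_API_KEY"  # placeholder: paste your Google Cloud API key here

def _get(endpoint, **params):
    """Issue a GET against the YouTube Data API v3 and parse the JSON response."""
    url = ("https://www.googleapis.com/youtube/v3/" + endpoint
           + "?" + urllib.parse.urlencode(params))
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def fetch_top_video_ids(channel_id, max_results=10):
    """Return a channel's most-viewed video IDs via search.list."""
    data = _get("search", key=API_KEY, channelId=channel_id, part="id",
                order="viewCount", type="video", maxResults=max_results)
    return [item["id"]["videoId"] for item in data["items"]]

def summarize(item):
    """Flatten one videos.list item into the title/views/likes/comments/duration schema."""
    stats = item["statistics"]
    return {
        "title": item["snippet"]["title"],
        "views": int(stats.get("viewCount", 0)),
        "likes": int(stats.get("likeCount", 0)),
        "comments": int(stats.get("commentCount", 0)),
        "duration": item["contentDetails"]["duration"],  # ISO 8601, e.g. "PT12M34S"
    }

def fetch_video_stats(video_ids):
    """Look up statistics for a batch of video IDs via videos.list."""
    data = _get("videos", key=API_KEY, id=",".join(video_ids),
                part="snippet,statistics,contentDetails")
    return [summarize(item) for item in data["items"]]
```

The flattened dicts from `summarize` are what get handed to GPT-4 in the next step.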
A Python script orchestrates the analysis. It fetches video data from the chosen channels, then prompts GPT-4 to rewrite the top 10 videos (by views) into a structured format: title, views, likes, comments, and duration. A second GPT-4 step ranks videos using a custom “views over likes” ratio—framed as a proxy for popularity—then filters out “news” content to match the creator’s preferences. After ranking, GPT-4 looks for correlations across the dataset to identify trending themes and generate six “out of the box” video ideas (including examples like gaming customization, phone comparisons, and tech gadget “factory tour” style content).
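The ranking step can be done in plain Python before the data reaches GPT-4. The video does not fully specify the sort direction; this sketch assumes a lower views-per-like ratio (more likes per view) signals stronger engagement, and applies the log mentioned in the key points:

```python
import math

def popularity_score(video):
    """Log of the views-to-likes ratio; the max() guard avoids division by zero."""
    return math.log(video["views"] / max(video["likes"], 1))

def rank_videos(videos, exclude_keyword="news"):
    """Drop news-style titles, then sort so the strongest like engagement comes first."""
    kept = [v for v in videos if exclude_keyword not in v["title"].lower()]
    return sorted(kept, key=popularity_score)
```

Swapping `popularity_score` for a views-to-comments variant is a one-line change, which is what the review question below probes.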
The most distinctive part comes next: a final GPT-4 prompt shifts from general trends to psychology-driven hooks. It asks for three fear-based YouTube ideas designed to maximize engagement, explicitly leaning into fear of privacy loss, fear of danger, and fear of missing out. The resulting shortlist includes concepts such as “The Dark Side of Smartphones: Your privacy is at risk and what you can do to protect yourself,” “VR addiction: Hidden dangers of escaping into a digital world,” and—after critique—the strongest option: “10 tech gadgets you didn’t know could be hacked and how to secure them before it’s too late.”
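The exact wording of that final prompt is not shown; a plausible reconstruction as a prompt-builder function, with wording that is a guess at the video's intent:

```python
def fear_hook_prompt(correlations, n_ideas=3):
    """Build the psychology-driven ideation prompt from the correlation analysis."""
    return (
        "Here are correlations found across top-performing tech videos:\n"
        f"{correlations}\n\n"
        f"Generate {n_ideas} YouTube video ideas designed to maximize engagement by "
        "leaning into fear of privacy loss, fear of danger, and fear of missing out. "
        "Do not suggest news-style content."
    )
```

Keeping the prompt in a function makes the content-exclusion constraint (“no news”) explicit and easy to tweak on reruns.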
Two chat-based agents then iterate on the shortlist. One agent pushes for broader audience appeal and more actionable solutions, while the other emphasizes urgency and curiosity. Both converge on the “hacked gadgets” concept, with an additional suggestion to include a short introduction underscoring why cybersecurity matters now. The final decision is saved to a file (bestid.txt) and emailed to the user.
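The critique loop can be sketched with a pluggable `ask(system_prompt, user_prompt)` callable standing in for the GPT-4 chat call; the agent personas and helper names here are assumptions, not the video's exact code:

```python
# Two critic personas mirroring the video's agents: one for audience fit,
# one for urgency and curiosity.
CRITICS = {
    "audience_agent": "You critique video ideas for broad appeal and actionable solutions.",
    "engagement_agent": "You critique video ideas for urgency and curiosity.",
}

def critique_round(ideas, ask):
    """Collect each agent's pick; `ask(system, user) -> str` wraps the actual chat API."""
    prompt = "Pick the strongest idea and explain why:\n" + "\n".join(f"- {i}" for i in ideas)
    return {name: ask(persona, prompt) for name, persona in CRITICS.items()}

def save_best(idea, path="bestid.txt"):
    """Persist the winning concept, as the workflow does before emailing it."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(idea)
```

Injecting `ask` keeps the loop testable offline and lets you swap models or add a third critic without touching the loop itself.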
Because the YouTube API pulls live data, rerunning the script can yield different results as channels update. The workflow is positioned as a starting point—expandable with better prompts, improved scoring logic, and additional data sources like Reddit and Twitter—while already producing a test video concept on the creator’s separate “GPT and me” channel.
Cornell Notes
The workflow builds a data-to-content pipeline: it pulls live engagement data from selected YouTube channels using the YouTube Data API v3, then uses GPT-4 to rank and interpret that data. A custom “views-to-likes” ratio helps identify videos with strong engagement signals, and GPT-4 extracts correlations to generate both general trending topics and psychology-driven concepts. The final ideation step intentionally targets fear and FOMO to maximize click appeal, producing three candidate video ideas. Two additional agents critique and refine those ideas, converging on a final concept focused on cybersecurity urgency: “10 tech gadgets you didn’t know could be hacked and how to secure them before it’s too late.” This matters because it turns constantly changing platform metrics into repeatable, testable content hypotheses.
How does the pipeline turn raw YouTube metrics into a ranking signal?
What prompts are used to move from data to content ideas?
Why does the workflow include multiple agents after GPT-4 generates ideas?
What final video concept wins, and what feedback drives that choice?
What makes the approach “data-driven” over time rather than a one-off brainstorm?
What infrastructure is required to run the system end-to-end?
Review Questions
- If you changed the scoring metric from views/likes to views/comments, what downstream effects would you expect on the GPT-4 correlation step and the final fear-based ideation?
- Which prompt step is responsible for generating fear/FOMO hooks, and what specific constraints does it apply (e.g., content exclusions)?
- How do the two agents’ critique criteria differ, and how does that difference influence the final selection?
Key Points
1. Collect YouTube channel IDs via HTML inspection, then use the YouTube Data API v3 to fetch live video engagement metrics.
2. Compute a “views-to-likes” ratio (and use its log) to rank videos before feeding structured data into GPT-4.
3. Use GPT-4 in stages: rewrite top videos into a fixed schema, rank by the chosen metric, extract correlations, then generate topic ideas.
4. Generate three candidate concepts specifically optimized for fear and fear of missing out, rather than generic trending topics.
5. Run a multi-agent critique loop to enforce audience breadth, urgency, and, crucially, actionable solutions.
6. Converge on a final idea, save it to bestid.txt, and email the result for quick iteration and testing.
7. Rerun the pipeline periodically because live API data can shift rankings and correlations, producing new content hypotheses.