
You've Been Using AI the Hard Way (Use This Instead)

NetworkChuck·
5 min read

Based on NetworkChuck's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Terminal AI workflows can eliminate fragmented browser chats by using local, persistent context files that new sessions automatically load.

Briefing

Using AI in a terminal beats using it in a browser because it puts project context and files under the user’s control—no more juggling dozens of chat windows, losing context, or rebuilding the same “research” from scratch. The core workflow centers on local context files (like a Gemini “gemini.md” file) that the AI reads every time a new session starts, so the system can pick up where work left off without re-explaining everything.

The transcript walks through a practical setup with Gemini CLI. After installing the Google Gemini CLI, the user logs in with a Google account and assigns a task (e.g., coffee research). The terminal experience reveals two advantages the browser hides: a visible context window and the ability to write files directly to the computer. A key demo has the AI produce a markdown file (Best Coffee Method.md) and then generate a project context file (gemini.md) by scanning the working directory. That file becomes the “source of truth” for future sessions: when a new Gemini session starts in the same folder, it automatically loads gemini.md, letting the user request new work (like writing a blog intro) without re-supplying context. The result is fewer fragmented chats and a single, persistent project memory stored locally on disk.
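A minimal version of that setup looks like the following (assuming Node.js is installed; the npm package name `@google/gemini-cli` is the published one at the time of writing, and the prompts are illustrative, not quoted from the video):

```
# Install the Gemini CLI globally (requires a recent Node.js)
npm install -g @google/gemini-cli

# Start a session inside the project folder; the first run walks
# through Google account login
cd ~/projects/coffee-research
gemini

# Inside the session, ask it to write files directly, e.g.:
#   > Research the best coffee brewing methods and save your findings
#   > to "Best Coffee Method.md", then generate a gemini.md context
#   > file describing this project.
```

Because the session runs in the project directory, everything it writes lands on the local disk rather than inside a browser tab.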

The same pattern extends beyond Gemini. Claude Code (Claude in the terminal) adds multi-agent capabilities: the user can spawn “agents” that operate with fresh context windows, delegate tasks, and keep the main conversation from bloating. A demo creates a “home lab research expert” agent, then references it from a prompt so it can run web research and compile results. The transcript emphasizes that agents act like separate workers with their own context budgets, which helps avoid runaway token usage and reduces the bias that builds up in long, single-thread conversations. It also shows permission-skipping (“dangerous”) modes and headless execution for running multiple tools and agents in parallel.

To keep different AI tools synchronized, the transcript describes a file-based standard: Claude, Gemini, and Codex-style agent context files are kept aligned in the same directory (e.g., CLAUDE.md, GEMINI.md, and AGENTS.md). The user runs multiple terminal AIs simultaneously—each with different strengths (deep work vs. high-level analysis)—while sharing the same local context. The work stays portable because it lives in a folder on the hard drive, not inside a locked browser session.

For project hygiene, a “session closer” agent script automates end-of-day cleanup: it summarizes progress, updates session logs, refreshes context files, and commits changes to a GitHub repository. That creates an audit trail of decisions and makes it easier to resume work later.
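A sketch of such a script, assuming the `claude` CLI's headless `-p`/`--print` mode is available (the prompt text and file names are illustrative, not taken from the transcript; the demo repo setup stands in for a real project folder):

```shell
#!/usr/bin/env sh
set -e
cd "$(mktemp -d)" && git init -q .          # demo repo; in practice, run inside your project
printf '# Session log\n' > SESSION_LOG.md

# Ask the AI to summarize the day and refresh the context files
# (skipped gracefully when the CLI is not installed)
if command -v claude >/dev/null 2>&1; then
  claude -p "Summarize today's progress and update SESSION_LOG.md and CLAUDE.md" || true
fi

# Commit everything so the decision history lives in version control
git add -A
git -c user.name=closer -c user.email=closer@example.com \
    commit -qm "Session close: $(date +%F)"
git log --oneline -1
```

Run nightly (or bound to a shell alias), this turns each day's work into a commit that future sessions—human or AI—can read back.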

Finally, the transcript introduces Open Code, an open-source terminal AI tool that can use local models or integrate with a Claude Pro subscription, including features like session sharing and timeline-style restoration. The overall message is that terminal-first AI turns context into something the user owns—enabling faster iteration, safer organization, and a workflow that can scale from single prompts to multi-agent projects.

Cornell Notes

Terminal-based AI is presented as a faster, more controllable alternative to browser chat because it exposes context and can read/write local files. The workflow hinges on persistent markdown context files (like gemini.md) that the AI loads automatically in new sessions, preventing the “lost context” spiral common in browser workflows. Claude Code (Claude in the terminal) adds multi-agent delegation, letting separate agents work with fresh context windows so the main thread stays usable. Keeping Claude, Gemini, and Codex context files synchronized in the same project folder lets multiple AIs collaborate without copying and pasting. GitHub commits and an end-of-day “session closer” agent help maintain documentation and resume work reliably.

Why does using AI in a terminal reduce the “context mess” people get in browser chats?

Browser workflows often fragment work across many chats and require users to manually preserve context (copy/paste into notes, restarting “research mode,” and re-explaining goals). In the terminal approach, the AI can create and load a local context file (e.g., gemini.md). Because each new session in the same directory reads that file, the AI starts with the project’s current instructions and state—so the user doesn’t need 20 scattered chats to keep continuity.

What is the practical role of a local context file like gemini.md?

The transcript shows Gemini creating gemini.md by analyzing the working folder and files. That file then becomes the session’s “memory”: when a new Gemini conversation starts in the same directory, it automatically uses gemini.md as context. As the user makes decisions, they can ask the AI to update gemini.md so future sessions reflect the latest research choices and project status.
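Its contents are simply whatever project facts a future session should start from; a hypothetical gemini.md for the coffee example might look like this (the headings and details are illustrative, not quoted from the video):

```markdown
# Project: Coffee Brewing Research

## Current state
- Compared pour-over, French press, and AeroPress; notes in "Best Coffee Method.md"
- Decision: focusing on pour-over for the blog write-up

## Instructions for the AI
- Keep all findings in markdown files in this folder
- Update this file whenever a research decision changes
```

Because it is plain markdown on disk, the user can edit it by hand or ask the AI to regenerate it after each session.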

How do multi-agent workflows in Claude Code help manage token limits and bias?

Claude Code supports creating agents (e.g., a “home lab research expert”) that can be called from the main conversation. Delegating tasks to sub-agents gives each worker a fresh context window (the transcript cites a large context budget for the delegated agent), which prevents the main thread from bloating. It also reduces bias because a fresh agent can review or critique work using a cleaner context rather than inheriting everything from a long back-and-forth.
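In Claude Code, such an agent can be defined as a markdown file with YAML frontmatter under `.claude/agents/` in the project; the name, description, and prompt below are illustrative stand-ins for the video's “home lab research expert”:

```markdown
---
name: homelab-researcher
description: Home lab research expert. Use for web research on self-hosting topics.
tools: WebSearch, Read, Write
---

You are a home lab research expert. When given a topic, research current
options, compare trade-offs, and write your findings to a markdown file
in the project folder. Keep the main conversation free of raw research.
```

Referencing the agent by name in a prompt then delegates the work to it, with its own fresh context window.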

What does “synchronizing context” across Gemini, Claude, and Codex mean in this workflow?

The transcript describes keeping multiple context files aligned inside the same project directory—GEMINI.md, CLAUDE.md, and AGENTS.md (used by Codex-style tooling). The user runs Claude, Gemini, and Codex in separate terminal sessions but ensures they all read the same up-to-date context files. That enables different AIs to collaborate on the same project without copying and pasting, with each tool contributing in its own role.
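One low-tech way to keep those files aligned—a convention sketch, not something the transcript prescribes—is to treat AGENTS.md as the single source of truth and symlink the other file names to it:

```shell
# AGENTS.md is the canonical context file (example content stands in
# for real project notes)
printf '# Project context\n\nSee session logs for history.\n' > AGENTS.md

# Claude Code looks for CLAUDE.md, Gemini CLI for its gemini.md context
# file; point both at the canonical copy
ln -sf AGENTS.md CLAUDE.md
ln -sf AGENTS.md GEMINI.md
```

Any tool that updates its own context file then updates the shared one, so the others pick up the change on their next session.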

How does GitHub fit into the terminal-AI workflow?

An end-of-day “session closer” agent summarizes what happened, updates session summaries and context files, checks whether core project files need changes, and then commits the project to a GitHub repository. Treating scripts and project artifacts like code creates a history of decisions and makes it easier to resume or roll back when something breaks or when the user returns after a break.

What makes Open Code different from the other terminal tools mentioned?

Open Code is described as open source and supports both local models and cloud-based usage via a Claude Pro subscription (avoiding per-use API keys). The transcript also highlights features like session sharing (a URL copied to clipboard), timeline-style restoration, headless server support, and the ability to switch models mid-workflow using configuration files.
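As a sketch of the model-switching idea, a project-level config file might pin the default model like this (the file name, key, and model identifier are assumptions for illustration, not details confirmed by the transcript):

```json
{
  "model": "anthropic/claude-sonnet-4"
}
```

Editing that one field—or keeping several configs and swapping them—changes which model the tool uses without restarting the workflow from scratch.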

Review Questions

  1. What specific mechanism lets the terminal workflow avoid re-supplying context in every new AI session?
  2. How do multi-agents change the way token usage and critique/review work compared with a single long chat thread?
  3. Why does synchronizing context files across multiple AI tools matter for collaboration without copy/paste?

Key Points

  1. Terminal AI workflows can eliminate fragmented browser chats by using local, persistent context files that new sessions automatically load.
  2. Gemini CLI can read and write files on the local machine, enabling the AI to generate both deliverables (e.g., markdown documents) and project context (e.g., gemini.md).
  3. Claude Code supports multi-agent delegation, giving sub-agents fresh context windows so the main conversation stays manageable and less biased.
  4. Keeping Gemini, Claude, and Codex-style context files synchronized in the same project directory allows multiple AIs to collaborate without copying and pasting.
  5. An end-of-day “session closer” agent can automate summaries, context updates, and GitHub commits to preserve decisions and make resuming work reliable.
  6. Open Code offers an open-source terminal AI option with support for local models or a Claude Pro subscription, plus session sharing and restoration features.

Highlights

A single local file (gemini.md) can act as the project’s “memory,” so new terminal sessions start with the right context automatically.
Multi-agent delegation in Claude Code lets tasks run in parallel with fresh context windows, preventing token bloat and reducing inherited bias.
Synchronizing Claude, Gemini, and Codex context files inside one folder turns multiple AIs into a coordinated team without copy/paste.
Treating project artifacts like code—summarizing progress and committing to GitHub—creates an audit trail of decisions and makes daily restart painless.
Open Code combines local-model flexibility with Claude Pro integration and adds session sharing and timeline-style restoration.

Topics

  • Terminal AI
  • Gemini CLI
  • Claude Code Agents
  • Local Context Files
  • Multi-Agent Workflow
