
The Future of AI Coding with Aja Hammerly

Sam Witteveen · 6 min read

Based on Sam Witteveen's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

AI coding works best as an iterative collaboration—define the problem, discuss options, generate incrementally, and review/correct—rather than relying on one-shot prompt outputs.

Briefing

AI coding is moving from flashy “one-shot” demos toward an iterative, pair-programming style workflow—where tools like Firebase Studio treat the model as a collaborator, not an autopilot. Aja Hammerly, developer relations head for Firebase Studio, frames the shift as practical: real work needs back-and-forth, testing, UI iteration, and deployment setup, not just a single prompt that spits out production-ready code.

Hammerly describes her own path from skepticism to adoption after an “aha moment” when she joined the Firebase Studio team early on. The product’s core vision—letting developers use AI as much or as little as they want inside one environment—resonates because it supports multiple workflows. In her experience, AI becomes most useful when it functions like a pairing partner: she describes the problem, discusses architecture options before writing code, iterates feature-by-feature, and reviews or corrects the output. Even stage demos require follow-up work—adding tests, refining UI details, and cleaning up implementation—so the value lies in ongoing collaboration rather than a single successful run.

That collaborative approach also challenges a common criticism that coding with AI is inherently “one prompt and done.” Hammerly argues that the best results come from multi-step discussions, similar to human pair programming. For larger tasks, she uses prompt structures akin to a PRD (problem definition and end-state expectations). For exploratory work, she prefers iterative “try it out” prompts and asks the model to reason about options—sometimes focusing on explanations and tradeoffs instead of generating code immediately.
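As an illustration of the two prompt styles described above, here is a minimal Python sketch. The helper functions, field names, and wording are hypothetical—they are not part of Firebase Studio or any Gemini API—and simply show how a PRD-like prompt differs in shape from an exploratory one:

```python
# Hypothetical prompt builders illustrating the two styles; nothing here
# is an official Firebase Studio or Gemini interface.

def prd_style_prompt(problem: str, end_state: str, constraints: list[str]) -> str:
    """PRD-like prompt: problem definition plus end-state expectations."""
    lines = [
        f"Problem: {problem}",
        f"Desired end state: {end_state}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Before writing code, list the architecture options and their tradeoffs.",
    ]
    return "\n".join(lines)


def exploratory_prompt(idea: str) -> str:
    """Iterative 'try it out' prompt that asks for reasoning before code."""
    return (
        f"I want to explore: {idea}\n"
        "Explain two or three possible approaches and their tradeoffs "
        "before generating any code."
    )


prompt = prd_style_prompt(
    problem="Users cannot export their task list",
    end_state="A 'Download CSV' button on the tasks page",
    constraints=["Use TypeScript", "No new backend dependencies"],
)
print(prompt.splitlines()[0])  # → Problem: Users cannot export their task list
```

The point of the structure, per her framing, is that the model gets the end state and the constraints up front, while the exploratory variant deliberately delays code generation in favor of discussion.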

A major differentiator for Firebase Studio, in her view, is the jump from local code generation to deployment and setup. Deployment is where many builders get stuck—firewalls, server configuration, and other fiddly cloud details. Firebase Studio aims to guide users through the common path automatically while still acknowledging that humans must validate certain steps. Hammerly highlights “AI-first” and prototype-based experiences that hide code for makers and designers, while still allowing traditional developers to move into a full IDE and work directly in code with AI assistance.

Designing for both non-coders and professional developers comes down to insulation and choice: prototype-first UI lets people give feedback without seeing much code, and users can switch back and forth between AI-assisted prototyping and code-level editing. Hammerly emphasizes that the world isn’t neatly split into “AI users” versus “code users.” People adapt based on project type, personal preference, and where they feel fastest.

On prompting, she recommends learning from strong practitioners, experimenting directly (including with logic problems on gemini.google.com), and—crucially—focusing on the desired end result rather than obsessing over the “how.” She also stresses checkpointing: treat AI work like pair programming, save progress regularly, and roll back when the conversation drifts. Firebase Studio’s auto agent mode can apply many changes automatically after laying out a plan, but it still requires human validation for a subset of requests.

Finally, Hammerly sees the future as flexible access to AI coding—through UI tools, CLIs, and multimodal interactions—rather than one dominant interface. Prompting itself is likened to another layer of programming language, becoming more accessible over time. The biggest excitement comes from what people build when domain experts and non-traditional coders can turn ideas into working apps, then share and iterate—often reducing toil and enabling specialized tools that previously required teams and budgets. The call to action is straightforward: use Firebase Studio, provide feedback through forums, and shape the roadmap with real user needs.

Cornell Notes

AI coding is best understood as an iterative collaboration, not a one-shot prompt trick. Aja Hammerly describes working with Firebase Studio (and similar tools) like pair programming: define the problem and desired outcome, iterate feature-by-feature, and review/correct the model’s output, including tests and UI refinements. Firebase Studio’s advantage is bridging from code generation to deployment guidance, helping users through common setup friction like cloud configuration and firewall concerns. The product also supports different skill levels via a prototype-first experience for makers and a full IDE for traditional developers, with the option to switch between modes. Effective prompting improves with clearer end goals, specific constraints, regular checkpointing, and learning from better prompt writers.

Why does Hammerly reject the idea that AI coding is mainly “one-shot prompting”?

She compares AI to a pairing partner in a 100% pairing shop: code emerges from discussion and iteration, not from one person writing everything while the other waits. Even impressive demos require follow-up—tests, UI cleanup, and production readiness work. In practice, she treats AI as a multi-step collaborator: she describes the problem, discusses architecture options before code, iterates features one by one, and corrects or redirects the model when needed.

What makes Firebase Studio particularly helpful beyond generating code?

The deployment and setup step. Hammerly calls out that cloud deployment and server configuration involve fiddly details (including firewall rules). Firebase Studio guides users down the path of success for common cases, while still requiring human validation for certain steps. She also notes that when AI can walk users through setup, it unlocks building for people who don’t want to wrestle with infrastructure minutiae.

How does Firebase Studio serve both makers/designers and traditional developers?

It uses a prototype-based experience that largely insulates users from code: makers can interact with the UI, circle or highlight elements, and provide feedback without needing to read or write code. Traditional developers can start there to get moving quickly, then dive into the full IDE to edit code directly and use AI inside the code. Users can also choose projects where they import existing code and use little or no AI, depending on what’s fastest for that workflow.

What prompting habits does Hammerly recommend for better results?

She advises learning from people who are good at prompting by reviewing their prompts and understanding why they asked questions that way. She also recommends experimenting directly (she personally spent time using gemini.google.com to solve logic problems). Two recurring pitfalls: experienced developers often get stuck on the “how” instead of the end result, and prompts are frequently missing explicit constraints (e.g., required technologies). Finally, she stresses checkpointing—regularly saving progress and rolling back when the model goes off track.
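The checkpointing habit she describes amounts to snapshotting working state before each AI-assisted step and rolling back when the conversation drifts. In practice this is usually just source control (e.g., frequent git commits); the small class below is an illustrative sketch of the idea, not a Firebase Studio feature:

```python
# Illustrative checkpoint-and-rollback helper; in real workflows this role
# is typically played by version control (commit early, reset when needed).

import copy


class Checkpoints:
    def __init__(self) -> None:
        self._snapshots: list[dict[str, str]] = []

    def save(self, files: dict[str, str]) -> int:
        """Record a snapshot of the working files; returns its index."""
        self._snapshots.append(copy.deepcopy(files))
        return len(self._snapshots) - 1

    def rollback(self, index: int) -> dict[str, str]:
        """Discard later snapshots and return the files as of `index`."""
        del self._snapshots[index + 1:]
        return copy.deepcopy(self._snapshots[index])


cp = Checkpoints()
good = cp.save({"app.py": "def main(): ..."})
cp.save({"app.py": "def main(): broken refactor"})  # the model drifted
restored = cp.rollback(good)
print(restored["app.py"])  # → def main(): ...
```

The design point is simply that rollback is cheap when checkpoints are frequent, which is what makes correcting a drifting AI session feel like a normal pairing check-in rather than a restart.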

What is auto agent mode, and what does it still require from humans?

Auto agent mode reduces the need to repeatedly click “apply.” After the AI lays out a plan, it can make many changes automatically. However, a handful of requests still require human validation, so the workflow assumes oversight rather than unlimited trust.

How do MCP servers and AI rules files improve AI coding?

Hammerly says MCP servers help ground the model with project- or environment-specific context, making outputs more reliable than relying on a general model alone. She mentions using Firebase MCP and language/framework MCPs (including a Dart one announced that morning). AI rules files encode preferences and project structure—such as Ruby style rules (including Jackal), customized build commands, and deployment assumptions—so the model writes code consistent with the user’s setup.
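The rules files she describes are plain project configuration that the model reads before generating code. A hypothetical sketch (the filename, format, and specific entries vary by tool and project; these lines are illustrative, not from the interview) might look like:

```markdown
# AI rules for this project (hypothetical example)

- Language: Ruby; follow the project's existing style rules
- Build: use the project's customized build command, not the tool's default
- Deploy: assume the project's usual deployment target; ask before changing infrastructure
- Tests: add or update tests alongside every generated change
```

Because the model consults these rules on every request, preferences are stated once rather than repeated in each prompt.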

Review Questions

  1. When does Hami recommend using a PRD-like prompt structure versus an exploratory “try it out” approach?
  2. What does checkpointing accomplish during AI-assisted coding, and how does Firebase Studio support it?
  3. How does Firebase Studio’s prototype-first experience change the workflow for makers compared with traditional developers?

Key Points

  1. AI coding works best as an iterative collaboration—define the problem, discuss options, generate incrementally, and review/correct—rather than relying on one-shot prompt outputs.

  2. Firebase Studio’s value extends past code generation by guiding users through deployment and setup tasks that typically block builders (including cloud configuration friction like firewall rules).

  3. The product supports multiple skill levels by separating a prototype-first, code-light experience for makers from a full IDE for traditional developers, with the ability to switch modes.

  4. Prompting quality improves when prompts focus on the desired end result, include clear technology constraints, and are learned through studying strong prompt writers and hands-on experimentation.

  5. Regular checkpointing is essential for AI workflows: save progress frequently and roll back when the model drifts, similar to pair-programming check-ins.

  6. Auto agent mode can apply many changes automatically after planning, but human validation remains necessary for certain requests.

  7. MCP servers plus AI rules files provide grounded, project-specific context that improves reliability across languages and deployment targets.

Highlights

AI should be treated like a pairing partner: multi-step discussion, feature-by-feature iteration, and human review—especially for tests and production readiness.
Firebase Studio’s biggest unlock is helping with deployment and setup, where cloud and server configuration details often stop non-experts.
Prototype-first design lets makers give feedback without seeing code, while developers can switch into a full IDE and still use AI inside the code.
Checkpointing turns AI coding into a controllable workflow: save progress, reset context when needed, and avoid compounding mistakes.
Prompting is evolving into a new “programming layer,” and the most exciting outcomes come from what domain experts and non-traditional coders build.

Topics

Mentioned

  • Firebase Studio
  • gemini.google.com
  • Firebase MCP
  • Gemini CLI
  • Gemini
  • Aja Hammerly
  • PRD
  • TDD
  • MCP
  • IDE
  • UI
  • CSS
  • JavaScript
  • CLI