
How to make vibe coding not suck…

Fireship · 5 min read

Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

AI-assisted coding often fails in a way that burns credits and time; MCP is presented as a way to make results more reliable by grounding prompts in real tools and data.

Briefing

AI-assisted coding can feel like a dopamine hit when prompts reliably produce working code—but it also turns into a “prompt treadmill of hell” when the model fails, forcing developers to burn credits without getting dependable results. The core fix offered here is to make AI coding more reliable and “quasi deterministic” by wiring coding agents to standardized Model Context Protocol (MCP) servers. Instead of asking a large language model to guess from scratch, MCP servers provide structured access to the right tools, docs, and runtime signals, reducing hallucinations and speeding up high-friction tasks.

MCP servers are framed as a standardized way for a coding agent to communicate with external systems—whether that’s a local app, a remote service that runs code, or a third-party API. With that in place, developers can offload specific responsibilities to purpose-built integrations. One example targets a long-standing pain point: generating correct Svelte 5 code. The Svelte MCP server lets developers start prompts with a /svelte command inside tools like Claude Code, automatically pulling the correct documentation and using an autofixer that performs static analysis to correct hallucinated or invalid React-style code.
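Concretely, MCP carries JSON-RPC 2.0 messages between the agent and the server; the spec defines methods such as `tools/call` for invoking a server-side tool. A minimal sketch of the request an agent would send (the tool name `get_documentation` and its arguments are illustrative, not any real server's API):

```python
import json

def tool_call_request(request_id, tool_name, arguments):
    """Build an MCP tools/call request (MCP transports JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# An agent asking a hypothetical docs server for framework documentation:
msg = tool_call_request(1, "get_documentation", {"topic": "runes"})
print(msg)
```

The server replies with a matching-`id` JSON-RPC response, so the agent gets structured results instead of guessing from training data.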

For front-end work, the Figma MCP server tackles the time-consuming conversion of designer files into real UI. It can connect to the desktop or cloud version of Figma, pull a design file, and generate HTML and CSS—plus options like React component generation, Tailwind output, or iOS UI elements using Figma’s more dependable tooling.

When building high-stakes features like payments, the transcript highlights Stripe’s MCP server approach: fetch documentation tied to the exact API version in use and provide tools for accessing live data. The upside is faster, safer integration; the warning is that powerful tools can also enable catastrophic mistakes (the example given is accidentally refunding 10,000 customers with a single prompt).

Even with better generation, runtime failures still happen. That’s where monitoring and issue trackers come in. Sentry integration is presented as a way to query missed errors and have the AI fix them on the fly, rather than manually deciphering “slop” shipped minutes earlier. For edge cases and long-tail bugs, Atlassian and GitHub MCP servers can pull Jira tickets or GitHub issues so an AI can address them and close the loop.

Finally, scaling introduces infrastructure automation. MCP servers for AWS, Cloudflare, and Vercel are described as enabling AI to provision cloud resources—paired with a caution that automation can be financially dangerous if instances aren’t shut down. The transcript also stresses that MCP’s value isn’t limited to third-party servers: the protocol is standardized, and frameworks exist to build custom MCP servers in major programming languages for specialized needs like custom data sources or smart-home control.
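The core idea behind a custom MCP tool server is small: register functions as tools, then dispatch incoming `tools/call` requests to them. A stdlib-only sketch of that dispatch loop (a real server would use an official MCP SDK and a stdio or HTTP transport; `read_temperature` is a made-up smart-home tool):

```python
import json

# Registry mapping tool names to plain Python functions.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def read_temperature(room: str) -> float:
    # Stand-in for a real data source (e.g. a smart-home API).
    return {"living_room": 21.5}.get(room, 20.0)

def handle(request_json: str) -> str:
    """Dispatch a JSON-RPC tools/call request to the registered tool."""
    req = json.loads(request_json)
    result = TOOLS[req["params"]["name"]](**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

Everything beyond this—capability negotiation, schemas, transports—is what the SDKs handle for you, which is why the protocol being standardized matters.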

To deploy the resulting systems, the sponsor—Sevalla—is pitched as a Heroku successor combining Google Kubernetes Engine with Cloudflare. It’s positioned as a simpler deployment path for full-stack apps, databases, and static sites, with Git-based shipping, templates, environment variables, analytics, and pipelines for preview, staging, and production. The takeaway is clear: AI coding doesn’t have to be gambling—MCP turns it into a tool-using workflow that’s more grounded in real systems, real docs, and real feedback loops.

Cornell Notes

AI coding often feels like gambling: prompts sometimes work, but failures trigger a “prompt treadmill of hell” where developers burn credits without reliable results. Model Context Protocol (MCP) servers aim to fix that by giving coding agents standardized access to external systems—local apps, remote services, or third-party APIs—so the model can use real tools and data instead of guessing. The transcript walks through MCP examples: the Svelte MCP server for correct Svelte 5 code with an autofixer, Figma for turning design files into HTML/CSS (and React/Tailwind/iOS UI), Stripe for version-specific payment docs and live-data tools, Sentry for querying runtime errors, and Atlassian/GitHub for pulling and fixing tickets. It also covers infrastructure MCP servers (AWS/Cloudflare/Vercel) and the option to build custom MCP servers, then discusses deploying them via Sevalla.

What problem does MCP try to solve in AI-assisted coding, and why does it matter?

MCP targets the unreliability of pure prompt-based generation. When an LLM hallucinates or can’t infer the right details, developers burn time and “Claude credits” without getting working code. MCP matters because it standardizes how a coding agent connects to external systems—docs, tools, live APIs, monitoring, and issue trackers—so outputs are grounded in real context rather than guesses. That shift is presented as making results “quasi deterministic” and reducing the prompt treadmill effect.

How does the Svelte MCP server reduce errors in Svelte 5 code generation?

The Svelte MCP server is installed into a coding tool like Claude Code and enables a /svelte prompt command. That command automatically provides the correct Svelte 5 documentation as context. It also includes an autofixer that runs static analysis and “de-slop-ifies” the code when the model hallucinates React-style patterns in the project.

What workflow bottleneck does the Figma MCP server address for front-end developers?

It targets the most time-consuming part of front-end implementation: converting designer Figma files into actual code. By connecting to the local desktop app or the cloud version of Figma, it pulls a design file and generates HTML and CSS. The transcript also notes it can generate React components, output Tailwind, or build iOS UI elements using Figma’s more reliable tooling.

Why is Stripe’s MCP server described as both powerful and risky?

Stripe’s MCP server can fetch documentation for the exact API version being used and provides tools to access live Stripe data. That accelerates integration and reduces version mismatch errors. But because it can act on real customer/payment data, the transcript warns that a single prompt could cause severe damage—using the example of accidentally refunding 10,000 customers.
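One common mitigation for the mass-refund scenario is a gate that forces human confirmation before a destructive tool runs against many records. A sketch under assumed names (`refund_customer` and the batch threshold are illustrative, not Stripe’s actual MCP tools):

```python
# Tools that mutate real money/data and therefore need a confirmation gate.
DESTRUCTIVE = {"refund_customer", "delete_subscription"}
MAX_BATCH = 5  # anything larger requires explicit human sign-off

def guarded_call(tool_name, targets, execute, confirm):
    """Run a tool over targets, pausing for confirmation on large destructive batches."""
    if tool_name in DESTRUCTIVE and len(targets) > MAX_BATCH:
        if not confirm(f"{tool_name} on {len(targets)} records — proceed?"):
            return []  # human declined: nothing executes
    return [execute(tool_name, t) for t in targets]
```

With a gate like this in the agent’s tool layer, one bad prompt can propose a 10,000-record refund but cannot execute it unattended.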

How do Sentry and Jira/GitHub MCP servers change the debugging loop?

Sentry integration lets the AI query issues and errors it missed before deployment, then fix them on the fly instead of manually interpreting newly shipped failures. For longer-lived edge cases, Atlassian (Jira) and GitHub MCP servers can pull the relevant tickets/issues so the AI can address them directly—then close the ticket—reducing the need for developers to read and triage everything themselves.
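The loop those integrations enable can be sketched in a few lines; `fetch_issues`, `propose_fix`, `apply_fix`, and `close_issue` are stand-ins for the Sentry/Jira/GitHub tool calls, not real APIs:

```python
# Sketch of the monitor -> fix -> close loop an agent runs over MCP.
def triage_loop(fetch_issues, propose_fix, apply_fix, close_issue):
    """Pull open issues, let the agent draft fixes, close what lands."""
    closed = []
    for issue in fetch_issues():          # e.g. Sentry errors or Jira tickets
        patch = propose_fix(issue)        # agent drafts a fix from the report
        if patch and apply_fix(patch):    # gate on tests/review before merge
            close_issue(issue["id"])      # close the loop in the tracker
            closed.append(issue["id"])
    return closed
```

The `apply_fix` gate is the important design choice: the agent closes a ticket only after its patch actually passes whatever verification you put there.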

What does infrastructure MCP automation aim to do, and what financial risk is highlighted?

Infrastructure MCP servers for AWS, Cloudflare, and Vercel are positioned as enabling AI to provision the actual cloud resources needed to scale an app. The transcript notes the appeal of hands-off automation but immediately flags the financial risk: if instances aren’t shut down properly, runaway costs can wreck a budget.
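One way to cap that risk is to give every AI-provisioned resource a time-to-live and periodically reap anything overdue. A minimal sketch with hypothetical hooks (`terminate` stands in for the real cloud SDK call):

```python
import time

def provision_with_ttl(registry, name, ttl_seconds, now=time.time):
    """Record a shutdown deadline for every resource the agent creates."""
    registry[name] = now() + ttl_seconds
    return name

def reap_expired(registry, terminate, now=time.time):
    """Shut down and forget every resource past its deadline."""
    expired = [n for n, deadline in registry.items() if now() > deadline]
    for n in expired:
        terminate(n)       # cloud SDK call in a real setup
        del registry[n]
    return expired
```

Run the reaper on a schedule (cron, a lambda, anything) and a forgotten instance costs you one TTL window, not a month.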

Review Questions

  1. How does MCP change the balance between “LLM guessing” and “tool-grounded execution” in coding workflows?
  2. Which MCP examples in the transcript directly reduce hallucinations, and which ones primarily improve runtime debugging and operations?
  3. What safeguards or monitoring steps become more important when AI coding is connected to live APIs like Stripe?

Key Points

  1. AI-assisted coding often fails in a way that burns credits and time; MCP is presented as a way to make results more reliable by grounding prompts in real tools and data.

  2. Model Context Protocol (MCP) standardizes how coding agents communicate with external systems, including local apps, remote services, and third-party APIs.

  3. Purpose-built MCP servers can reduce hallucinations by injecting correct documentation and running static-analysis-based fixes (e.g., the Svelte autofixer for Svelte 5).

  4. MCP integrations can automate high-friction developer tasks like converting Figma designs into HTML/CSS and generating UI code variants (React, Tailwind, iOS).

  5. Connecting AI to live payment systems via MCP (e.g., Stripe) speeds integration but increases the stakes of prompt mistakes.

  6. Operational feedback loops improve when MCP servers connect agents to monitoring and issue trackers like Sentry, Jira, and GitHub.

  7. Infrastructure MCP servers can automate cloud provisioning, but cost control (e.g., shutting down instances) becomes critical.

Highlights

MCP turns AI coding from “prompt-only guessing” into a tool-using workflow by standardizing access to docs, APIs, monitoring, and issue trackers.
The Svelte MCP server uses an autofixer with static analysis to correct hallucinated code patterns in Svelte 5 projects.
Stripe MCP integration can fetch version-specific documentation and access live data—powerful enough to make catastrophic mistakes if prompts go wrong.
Sentry MCP enables querying runtime errors and fixing them immediately, reducing manual debugging of recently deployed “slop.”
Infrastructure MCP servers for AWS/Cloudflare/Vercel promise automated scaling but raise real financial risk if resources aren’t shut down.
