Code Anything with Perplexity, Here's How
Based on David Ondrej's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Perplexity is positioned as a faster, more reliable way to build software because it combines multi-agent web search with the ability to switch among different large language models—then feeds that information directly into coding workflows. The practical payoff: when a reference or fix can’t be found quickly with a general chat model, Perplexity can browse and cross-check many sources (the transcript mentions checking 20 or more), which helps reduce dead ends during development.
The walkthrough contrasts Perplexity with ChatGPT on two differences: model choice and depth of web research. In Perplexity, users can select which model to run (examples mentioned include Claude, Grok, and GPT-4-style options), while ChatGPT is described as limited to OpenAI's models. More importantly for coding, Perplexity's web browsing is framed as deeper and more reference-complete, illustrated with a Bill Gurley example where one model couldn't locate a specific reference, but Perplexity found it when given the same prompt.
From there, the transcript shifts into a hands-on build: a Chrome extension that tracks time spent on each website. The process starts with upgrading to Perplexity Pro (priced at $20/month in the transcript) to access the “best models,” then selecting a model in settings (the creator favors “Sonnet 3.5”). A key workflow recommendation follows: use two browser tabs. One tab is set to search the web and generate an outline or plan; the other tab is used for writing code without web search, to avoid wasting context.
The coding plan is generated via Perplexity, then executed in a coding environment using Cursor (described as a VS Code fork). The extension is built with a minimal file set—manifest.json plus JavaScript and HTML files (the transcript references creating three files first, then adding popup HTML and background logic). The manifest is configured for a Chrome extension using the latest manifest version, with permissions that allow the extension to store tab-related data and run a background service worker.
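The transcript doesn't show the manifest verbatim, but a minimal sketch consistent with its description (latest manifest version, permissions for tab and storage access, a background service worker, and a popup) might look like this; the extension name and file names are assumptions:

```json
{
  "manifest_version": 3,
  "name": "Time Tracker",
  "version": "1.0",
  "permissions": ["tabs", "storage"],
  "background": { "service_worker": "background.js" },
  "action": { "default_popup": "popup.html" }
}
```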
Debugging becomes the real story. The extension initially logs tracking numbers, but the popup UI appears blank. The transcript then walks through iterative fixes: reloading the extension, adding console.log statements, and addressing Chrome's extension content security policy (CSP), which blocks inline JavaScript in popup HTML. The fix is to move the JavaScript into a separate popup JS file. After that, the UI begins displaying time spent.
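A minimal sketch of what the separated popup script could contain; the transcript only says the inline code moves into its own file, so the helper names here (`formatDuration`, `renderLines`) are illustrative, not from the video:

```javascript
// popup.js — referenced from popup.html via <script src="popup.js"></script>,
// because Chrome's extension CSP blocks inline <script> blocks.

// Format milliseconds as a human-readable "Xm Ys" string.
function formatDuration(ms) {
  const totalSeconds = Math.floor(ms / 1000);
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${minutes}m ${seconds}s`;
}

// Build the display lines from stored per-site totals, largest first.
function renderLines(siteTotals) {
  return Object.entries(siteTotals)
    .sort((a, b) => b[1] - a[1])
    .map(([site, ms]) => `${site}: ${formatDuration(ms)}`);
}

// In the real popup, the totals would come from chrome.storage, e.g.:
// chrome.storage.local.get("siteTotals", ({ siteTotals }) => { /* render */ });
```

Keeping the formatting logic in pure functions like these also makes the UI layer easy to test separately from the background tracking logic.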
Finally, the tracker’s behavior is refined: time should roll up per site rather than splitting across subpages, and the transcript notes that persistence across reloads and correct grouping require additional logic tweaks. By the end, the extension is working well enough to show per-site time totals, and the broader takeaway is that Perplexity’s web search can shorten “error hell” by finding up-to-date fixes and documentation when coding stalls.
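The per-site rollup the transcript describes can be sketched as grouping by hostname, so subpages of the same site share one bucket; the function names below are assumptions for illustration:

```javascript
// Reduce any page URL to a single per-site key, so time on
// youtube.com/watch?... and youtube.com/feed counts toward one total.
function siteKey(url) {
  return new URL(url).hostname.replace(/^www\./, "");
}

// Add elapsed milliseconds for a URL into the per-site totals object.
function addTime(totals, url, ms) {
  const key = siteKey(url);
  totals[key] = (totals[key] || 0) + ms;
  return totals;
}
```

For persistence across reloads, the real extension would write `totals` to chrome.storage rather than keeping it only in memory.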
Overall, the transcript argues for a workflow where Perplexity handles research and planning (including browsing for fixes), while a dedicated coding assistant (Cursor) handles implementation—yielding faster iteration even when errors occur on the first attempt.
Cornell Notes
Perplexity is presented as a coding accelerator because it combines multi-agent web search with selectable LLMs, letting developers pull in accurate references and fixes during implementation. The transcript demonstrates building a Chrome extension that tracks time spent per website using a two-tab workflow: one tab for web-backed planning and one tab for code generation without search. After initial success in logging tab activity, the popup UI fails to display results due to a Chrome content security restriction on inline JavaScript, which is resolved by moving code into a separate popup JS file. The project then gets refined so time totals behave correctly across reloads and roll up per site rather than per subpage.
Why is Perplexity framed as more useful than a general chat model for coding tasks?
What two-tab workflow is recommended for building the Chrome extension?
What initial debugging pattern appears when the extension tracks time but the popup stays blank?
What specific Chrome issue breaks the popup, and how is it fixed?
What refinement is needed so the tracker groups time correctly?
Review Questions
- What two capabilities of Perplexity are most tied to faster coding iteration in this transcript (and why)?
- Describe the content security issue that prevents the popup from working and the structural change that resolves it.
- How does the recommended two-tab workflow reduce wasted context during extension development?
Key Points
1. Perplexity's selectable model options and multi-source web browsing are positioned as direct advantages for coding tasks that require current references or fixes.
2. Use a two-tab workflow: one tab for web-backed planning and one tab for code generation without web search, to keep context tight.
3. Build the extension with a minimal file set (manifest plus background logic and popup UI), then verify permissions and service worker behavior early.
4. When the popup is blank but logs show activity, debug the UI layer separately from the background tracking logic.
5. Chrome's content security policy can block inline JavaScript in extension popups; moving the JS into a separate file is a reliable fix.
6. Track time at the correct granularity by aggregating per site rather than per subpage, and validate behavior across reloads.
7. When stuck, providing screenshots and precise error details to Perplexity is presented as a fast path out of "error hell."