
I replaced my entire tech stack with Postgres...

Fireship · 5 min read

Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

PostgreSQL’s advanced data types (especially JSONB) and extensibility enable many backend capabilities to live inside the database rather than separate services.

Briefing

PostgreSQL can replace a surprising chunk of a typical web “tech stack,” letting developers build full applications with fewer external services by leaning on built-in features and a large extension ecosystem. The core pitch is simple: instead of paying for separate tools for caching, cron jobs, search, analytics, APIs, realtime syncing, and even authentication plumbing, many of those needs can be handled inside PostgreSQL—often with extensions—so the database becomes the central system rather than just a data store.

The case starts with what makes PostgreSQL different from simpler SQL databases: advanced data types (including JSONB, arrays, key-value patterns, binary data, and geometric types) and, crucially, extensibility. That extensibility has produced an ecosystem where developers “mod” the database with new capabilities. The transcript also includes an important caveat—just because PostgreSQL can do something doesn’t mean it should—framing the approach as a tool-selection problem, not a blanket replacement of every specialized service.

From there, the examples move through common “infrastructure” tasks that teams often outsource. For unstructured or semi-structured data, JSONB lets each row carry a different shape while still supporting SQL queries that filter on and access fields inside the JSON. For scheduled work, the pg_cron extension provides database-native cron jobs, running SQL on a schedule without editing system crontabs or paying for a separate scheduler.

Caching is treated as a “maybe you don’t need Redis” moment. The transcript proposes a “poor man’s Redis” using unlogged tables as a cache, trading durability for speed by skipping write-ahead logging for that data. It then suggests tuning PostgreSQL to keep cache-like data in shared buffers, using autovacuum to prevent bloat, and optionally pairing with pg_cron to expire entries.

AI and search are handled with extensions rather than separate vector databases or search engines. pgvector adds a vector type for embeddings and nearest-neighbor queries using distance metrics like L2. The pgai extension goes further, supporting embedding workflows and loading datasets via SQL. For full-text search, the tsvector type and GIN (generalized inverted) indexes support ranking and querying with the @@ operator, reducing reliance on services like Algolia or Elasticsearch.
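
The full-text side of that paragraph can be sketched in plain SQL. This is a minimal, illustrative example (table and column names are assumptions, not from the transcript): a generated tsvector column, a GIN index, and a ranked query with @@.

```sql
-- Sketch: full-text search with tsvector and a GIN index
CREATE TABLE articles (
  id serial PRIMARY KEY,
  body text,
  -- keep the tsvector in sync with body automatically
  search tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED
);
CREATE INDEX articles_search_idx ON articles USING GIN (search);

-- Match and rank with the @@ operator
SELECT id, ts_rank(search, query) AS rank
FROM articles, websearch_to_tsquery('english', 'postgres extensions') AS query
WHERE search @@ query
ORDER BY rank DESC;
```

The GIN index is what keeps @@ queries fast as the table grows; without it, every row’s tsvector is scanned.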

The stack gets more “full” as the transcript adds API and frontend integration. The pg_graphql extension exposes the database as a GraphQL API generated from the SQL schema, avoiding extra servers and middleware. For realtime data updates, ElectricSQL is positioned as a sync layer between the database and the frontend, reducing the need for websocket-heavy custom code. For authentication and authorization, pgcrypto and pgjwt show how to hash passwords, sign JSON Web Tokens, and enforce Row Level Security policies so users can only read and write rows they own.

Finally, the transcript covers analytics and delivery. pg_mooncake turns PostgreSQL into an analytics-oriented system with columnstore tables executed via DuckDB, and supports dropping data into cloud storage and visualizing it with tools like Grafana. PostgREST then exposes tables as a RESTful JSON API with features like filtering, pagination, and authentication. The endgame is storing HTML/CSS/JavaScript in the database itself—pushing application logic and UI assets closer to the data layer than most stacks typically allow.

Cornell Notes

PostgreSQL can serve as a near end-to-end web platform by combining built-in capabilities (like JSONB and Row Level Security) with extensions for scheduling, caching, AI vectors, search, APIs, realtime sync, authentication, analytics, and even UI asset storage. The transcript’s central idea is that many “separate services” teams add—cron, Redis, vector DBs, search engines, GraphQL servers, REST layers, and analytics pipelines—can often be replaced or reduced by PostgreSQL extensions. Examples include pg_cron for scheduled SQL, pgvector and pgai for embeddings and vector queries, tsvector for full-text search, and PostgREST for instant RESTful endpoints. The approach matters because it can cut infrastructure sprawl and cost while keeping data logic close to the database—though it still requires critical judgment about when a specialized tool is truly necessary.

Why does JSONB matter for replacing parts of a typical backend stack?

JSONB lets each row store semi-structured data with a flexible shape, addressing a common argument for NoSQL databases. PostgreSQL can still query inside JSONB with plain SQL: a table can be created with a JSONB column, JSON inserted as raw strings, and a SELECT with a WHERE clause can then filter on fields inside the JSONB document.
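
As a concrete sketch (table and field names are illustrative, not from the transcript), the same table can hold rows with different shapes and still be filtered with ordinary SQL:

```sql
-- Sketch: flexible-shape rows in a JSONB column
CREATE TABLE events (id serial PRIMARY KEY, data jsonb);

INSERT INTO events (data) VALUES
  ('{"type": "click", "page": "/home"}'),
  ('{"type": "signup", "plan": "pro", "referrer": "ad"}');

-- ->> extracts a field as text, usable in WHERE and SELECT
SELECT data->>'plan' AS plan
FROM events
WHERE data->>'type' = 'signup';
```

For frequent JSONB filtering, a GIN index on the column keeps these lookups from scanning the whole table.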

How can PostgreSQL handle scheduled jobs without system cron or external schedulers?

Installing the pg_cron extension enables database-native cron jobs. Once installed, a single SQL call can register a job with a name and a cron-style schedule, plus the SQL command to run on that schedule—such as daily cleanup or aggregation—without editing Linux crontabs or relying on a separate paid scheduler.
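
A minimal sketch of that registration call (the job name and the `events` table are assumptions for illustration):

```sql
-- Sketch: a nightly cleanup job with pg_cron
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Run every day at 03:00: delete rows older than 30 days
SELECT cron.schedule(
  'nightly-cleanup',
  '0 3 * * *',
  $$DELETE FROM events WHERE created_at < now() - interval '30 days'$$
);

-- Registered jobs are visible in the cron.job catalog table
SELECT jobid, jobname, schedule FROM cron.job;
```

The schedule string uses standard cron syntax, so existing crontab entries translate directly.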

What’s the proposed “poor man’s Redis” approach using PostgreSQL, and what tradeoff does it make?

The transcript suggests using an unlogged table as a cache. Unlogged tables skip write-ahead logging, which improves write speed but sacrifices durability: after a crash, an unlogged table is truncated on recovery, so even committed writes to the cache are lost. The idea is to accept that tradeoff for cache-like data, then optionally tune shared buffers and use pg_cron to expire entries via a TTL column.
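
A minimal sketch of that pattern (table and key names are illustrative): a key-value cache with upserts and a TTL column that expired-entry reads ignore.

```sql
-- Sketch: an unlogged table as a cache (fast, but truncated after a crash)
CREATE UNLOGGED TABLE cache (
  key text PRIMARY KEY,
  value jsonb,
  expires_at timestamptz
);

-- Upsert a cache entry with a 5-minute TTL
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42', '{"name": "Ada"}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
  SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- Read, ignoring expired entries; a pg_cron job can purge them periodically
SELECT value FROM cache WHERE key = 'user:42' AND expires_at > now();
```

Unlike Redis, expiry here is lazy (filtered at read time) until a scheduled job actually deletes the stale rows.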

How do vector search and AI embedding workflows move into PostgreSQL?

pgvector adds a vector data type for embeddings and supports nearest-neighbor queries using distance metrics like L2. For more AI workflow inside SQL, the pgai extension is presented as handling vector embeddings and enabling loading and vectorizing datasets entirely through SQL, reducing the need for a separate vector database service.
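
The pgvector half can be sketched directly in SQL. This is illustrative only—the 3-dimensional vectors stand in for real embeddings, which typically have hundreds or thousands of dimensions:

```sql
-- Sketch: nearest-neighbor search with pgvector
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE docs (id serial PRIMARY KEY, content text, embedding vector(3));

INSERT INTO docs (content, embedding) VALUES
  ('about databases', '[0.1, 0.9, 0.2]'),
  ('about cooking',   '[0.8, 0.1, 0.7]');

-- <-> is L2 (Euclidean) distance; order by distance to a query embedding
SELECT content
FROM docs
ORDER BY embedding <-> '[0.1, 0.8, 0.3]'
LIMIT 1;
```

pgvector also provides other distance operators (inner product, cosine) and approximate indexes for large tables.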

What extensions turn PostgreSQL into an API layer?

Two different API paths are highlighted. pg_graphql lets clients query the database via GraphQL, with the GraphQL schema reflected from the SQL tables. PostgREST automatically exposes tables as a RESTful JSON API, letting users browse endpoints like /&lt;table&gt; on the PostgREST server and get JSON responses with features such as filtering, pagination, and authentication.
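
On the GraphQL side, pg_graphql exposes a SQL entry point that takes a GraphQL document and returns JSON. A sketch, assuming a `docs` table exists (pg_graphql reflects tables as `<table>Collection` fields):

```sql
-- Sketch: querying through pg_graphql's SQL entry point
CREATE EXTENSION IF NOT EXISTS pg_graphql;

-- graphql.resolve takes a GraphQL document and returns a JSON result
SELECT graphql.resolve($$
  query {
    docsCollection(first: 2) {
      edges { node { id content } }
    }
  }
$$);
```

On the REST side, PostgREST serves the same data over HTTP with filter operators in the query string, e.g. `GET /docs?id=eq.1` returns matching rows as JSON.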

How does PostgreSQL support authentication and authorization without external auth services?

The transcript combines pgcrypto and pgjwt with Row Level Security. pgcrypto can hash passwords so a login query compares a submitted password against the stored hash. pgjwt signs tokens on the server, and Row Level Security policies ensure queries only return rows owned by the current user—verified by checking the user’s token before executing queries.
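
A sketch of the pgcrypto and RLS pieces (table names and the `app.user_id` setting are assumptions for illustration; the transcript does not specify them):

```sql
-- Sketch: password hashing with pgcrypto and per-user rows with RLS
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE users (
  id serial PRIMARY KEY,
  username text UNIQUE,
  password_hash text  -- stored as crypt(password, gen_salt('bf')) at signup
);

-- Login check: crypt() re-hashes the candidate with the stored salt
SELECT id FROM users
WHERE username = 'ada'
  AND password_hash = crypt('submitted-password', password_hash);

-- RLS: each user sees only rows they own; current_setting carries the
-- authenticated user id, e.g. set per-connection from a verified JWT claim
CREATE TABLE notes (id serial PRIMARY KEY, owner_id int, body text);
ALTER TABLE notes ENABLE ROW LEVEL SECURITY;
CREATE POLICY notes_owner ON notes
  USING (owner_id = current_setting('app.user_id')::int);
```

With the policy in place, an ordinary `SELECT * FROM notes` is silently filtered to the current user’s rows—no per-query WHERE clause needed in application code.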

Review Questions

  1. Which PostgreSQL features or extensions in the transcript address scheduled execution, and how do they differ from system-level cron?
  2. How do pgvector/pgai and tsvector/GIN indexing each change what kinds of “search” a PostgreSQL-backed app can support?
  3. What combination of extensions and security features is used to hash passwords, sign JWTs, and restrict data access with Row Level Security?

Key Points

  1. PostgreSQL’s advanced data types (especially JSONB) and extensibility enable many backend capabilities to live inside the database rather than separate services.

  2. pg_cron provides scheduled execution of SQL directly in PostgreSQL, reducing reliance on external cron tooling.

  3. A cache-like setup can be approximated with unlogged tables, trading durability for speed and optionally pairing with TTL expiration via pg_cron.

  4. Vector search and AI embedding workflows can be implemented with pgvector and pgai using SQL-based nearest-neighbor queries and dataset vectorization.

  5. Full-text search can be built with tsvector, GIN indexes, and ranking queries using the @@ operator.

  6. API layers can be generated from the database using pg_graphql (GraphQL) and PostgREST (REST/JSON) without writing a separate server.

  7. Authentication and authorization can be handled with pgcrypto, pgjwt, and Row Level Security policies to enforce per-user row access.

Highlights

PostgreSQL can act as more than a database: extensions can replace cron, caching, vector search, full-text search, API layers, and parts of auth.
Unlogged tables are pitched as a Redis substitute—fast cache behavior by skipping write-ahead logging, with an explicit durability tradeoff.
pgvector plus pgai bring embedding storage and nearest-neighbor queries into SQL, while tsvector plus @@ supports ranked full-text search.
pg_graphql and PostgREST can turn tables into GraphQL or REST endpoints with minimal extra infrastructure.
Row Level Security, backed by pgjwt and pgcrypto, can enforce that users only see rows they own.

Topics

  • PostgreSQL Extensions
  • JSONB Queries
  • pg_cron Scheduling
  • Vector Search
  • Full-Text Search
  • GraphQL and REST
  • Row Level Security
  • AI Embeddings
  • Caching
  • Analytics
  • Realtime Sync

Mentioned

  • Neon
  • DuckDB
  • Firebase
  • Supabase
  • Algolia
  • Elasticsearch
  • Grafana
  • SQL
  • NoSQL
  • JSON
  • JSONB
  • AI
  • RAG
  • TTL
  • JWT
  • L2 distance
  • tsvector
  • API
  • SQL tools
  • RLS
  • WAL
  • HTML
  • CSS
  • JavaScript