
The greatest unsolved problem in computer science...

Fireship · 5 min read

Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

P versus NP asks whether efficient verification of solutions implies efficient discovery of solutions.

Briefing

P versus NP is the most famous unsolved problem in computer science because it asks a deceptively simple question: if a proposed solution to a problem can be checked quickly, can the solution also be found quickly? The stakes are enormous. A proof that P equals NP would imply that many tasks currently treated as computationally infeasible—like cracking cryptographic systems—could become efficiently solvable, while also unlocking fast algorithms across science and engineering. A proof that P does not equal NP would instead confirm that some problems have an inherent “no shortcut” barrier, meaning certain kinds of computation can’t be sped up even in principle.

The problem was formally defined in 1971 by Stephen Cook in “The Complexity of Theorem-Proving Procedures,” but its roots reach earlier. In 1955, mathematician John Nash warned the National Security Agency that breaking well-designed cryptography doesn’t scale linearly with key length; the effort grows exponentially as keys get longer. That observation aligns with the intuition behind P ≠ NP: multiplying two large primes is easy, while reversing the process—factoring the product back into primes—is dramatically harder. Yet despite decades of progress, mathematicians still can’t prove that no faster factoring algorithm exists.

At the heart of the debate is the distinction between P and NP. P (“polynomial time”) describes problems solvable efficiently as inputs grow—sorting a list is a classic example, where work scales roughly with n or n log n. NP (“non-deterministic polynomial time”) describes problems where, once someone hands over a candidate answer, verifying it can be done efficiently. The traveling salesman problem illustrates the gap: finding the shortest route among factorially many possibilities is brutal, but checking a proposed route—adding up its length and comparing it against a target—is comparatively straightforward.

Within NP sits a special class called NP-complete problems, introduced through Cook’s work on SAT, the Boolean satisfiability problem. SAT asks whether there exists a true/false assignment that makes a logical formula evaluate to true. SAT is easy to verify but hard to solve from scratch, and Cook showed it is NP-complete—meaning any efficient solution for one NP-complete problem would translate into efficient solutions for all of them. That mutual reducibility is why a breakthrough on any NP-complete problem would collapse the entire landscape: P would equal NP, and problems once considered intractable would become tractable.

Despite roughly 50 years of attempts, no polynomial-time algorithm has been found for any NP-complete problem, and no proof has settled the question either way. The transcript frames the importance as both practical and philosophical: P = NP would suggest the universe permits shortcuts through hard search, while P ≠ NP would imply deep computational limits are built into reality. The Clay Mathematics Institute adds a concrete incentive—$1 million for a correct proof—though the transcript also notes the unsettling possibility that the answer may already be “known” by some higher system, leaving humans to discover it the hard way.

The segment ends with a sponsor pitch for MongoDB Atlas, positioned as a way to simplify AI application back-end architectures by consolidating source data, vector embeddings, and metadata, and supporting real-time stream processing.

Cornell Notes

P versus NP asks whether problems whose solutions can be verified quickly can also be solved quickly. P problems have efficient algorithms (polynomial time) as input grows, while NP problems allow fast verification but potentially slow discovery. NP-complete problems—starting with SAT—are the hardest in NP in the sense that solving one in polynomial time would yield polynomial-time solutions for every problem in NP, implying P = NP. No one has proved either direction, and no polynomial-time algorithm is known for any NP-complete problem despite decades of research. The outcome matters because it would reshape cryptography, optimization, and the broader understanding of computational limits.

What does “P” mean in P versus NP, and how is it different from “NP”?

P (“polynomial time”) refers to problems solvable efficiently as the input size grows—work scales like n or n log n rather than exploding exponentially. NP (“non-deterministic polynomial time”) refers to problems where a proposed solution can be checked efficiently (in polynomial time), even if finding that solution from scratch can be extremely difficult. Sorting is used as a P example; checking a candidate route’s length in the traveling salesman problem is used to illustrate NP verification versus hard discovery.
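As a rough illustration (not from the transcript), the discovery-versus-verification split already shows up with sorting: producing the answer takes O(n log n) work, while verifying a candidate answer takes a single linear pass.

```python
def is_sorted(xs):
    """Polynomial-time (linear) verification of a candidate answer."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

data = [5, 2, 9, 1]
answer = sorted(data)      # discovery: O(n log n)
print(is_sorted(answer))   # verification: O(n) -> True
print(is_sorted(data))     # -> False
```

For problems in P, discovery is itself cheap; the interesting NP cases are those where verification stays this cheap while discovery appears to blow up.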

Why is factoring used as an intuition for P ≠ NP?

Multiplying two primes is easy: given primes like 7 and 13, the product can be computed quickly. Factoring is the reverse task: given a number like 91, finding the prime factors may require testing possibilities, and for large numbers the search becomes far more expensive. Public-key cryptography (like RSA) relies on this asymmetry—fast multiplication, slow factoring—yet mathematicians still can’t prove that no faster factoring algorithm exists.
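A minimal sketch of the asymmetry using the transcript’s numbers (the naive trial-division routine is an illustration, not a real cryptanalytic method):

```python
def factor(n):
    """Naive trial division: fine for 91, hopeless for RSA-sized numbers."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d, n // d
    return n, 1  # n is prime

print(7 * 13)      # multiplication: one instant operation -> 91
print(factor(91))  # factoring: a search over candidates -> (7, 13)
```

For a 2048-bit RSA modulus, that search space is astronomically large, which is exactly the asymmetry the cryptosystem relies on.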

How does the traveling salesman problem illustrate the verification-versus-discovery gap?

With n cities, the number of possible routes grows factorially (roughly (n−1)!). Brute force quickly becomes impossible—for 15 cities, the transcript cites about 87 billion possibilities. But if someone provides a candidate route, adding up its length and checking it against a target is comparatively easy; it’s finding the best route (or certifying that none is shorter) that explodes. That contrast—easy verification, hard discovery—maps onto NP versus P intuitions.
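The factorial blow-up and the cheap route check can be sketched as follows (the 4-city distance matrix is an invented toy example, not from the transcript):

```python
import itertools
import math

# 15 cities -> 14! directed tours from a fixed start, the ~87 billion figure.
print(math.factorial(14))  # -> 87178291200

def tour_length(route, dist):
    """O(n) verification work: sum the edges of a proposed closed tour."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

# Toy symmetric distance matrix for 4 cities.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]

candidate = [0, 1, 3, 2]
print(tour_length(candidate, dist))  # checking one route: fast -> 18

# Discovery by brute force: try every permutation of the remaining cities.
best = min(itertools.permutations(range(1, 4)),
           key=lambda p: tour_length((0,) + p, dist))
print((0,) + best)  # -> (0, 1, 3, 2)
```

At 4 cities the brute-force loop is trivial; the point is that the loop body stays cheap while the number of iterations grows factorially with n.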

What makes SAT the starting point for NP-complete problems?

SAT (Boolean satisfiability) asks whether a logical expression can be made true by assigning each variable either true or false. Verifying a proposed assignment is straightforward, but finding one may require exploring many combinations. Stephen Cook showed in 1971 that SAT is NP-complete, meaning any polynomial-time solution for SAT would imply polynomial-time solutions for every problem in NP.
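A small sketch of both halves—fast verification, exponential discovery—using a made-up 3-variable formula in CNF (a conjunction of OR-clauses; the encoding here is an illustration, not the transcript’s):

```python
from itertools import product

# Each clause is a list of literals: positive int i means variable i,
# negative means its negation. This encodes:
# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
formula = [[1, -2], [2, 3], [-1, -3]]

def satisfies(assignment, formula):
    """Polynomial-time verification of a candidate assignment."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

def brute_force_sat(formula, n_vars):
    """Discovery by exhaustive search: up to 2**n_vars candidates."""
    for bits in product([True, False], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if satisfies(assignment, formula):
            return assignment
    return None

print(brute_force_sat(formula, 3))  # -> {1: True, 2: True, 3: False}
```

The verifier runs in time linear in the formula size, while the solver’s search space doubles with every added variable—the gap P versus NP asks about.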

Why would solving one NP-complete problem in polynomial time collapse the entire class?

NP-complete problems are mutually reducible in complexity terms: an efficient algorithm for one can be transformed into efficient algorithms for the others. So a polynomial-time breakthrough for any NP-complete problem would imply P = NP, turning many currently “intractable” tasks into efficiently solvable ones.

What are the real-world stakes if P equals NP versus if it doesn’t?

If P = NP, cryptographic systems built on hard search assumptions could become breakable quickly, since many hard problems would gain efficient solution methods. The transcript also notes dramatic knock-on effects—both beneficial (fast solutions across domains) and chaotic (instant cracking of passwords, encryption keys, and crypto wallets). If P ≠ NP, it supports the idea that some problems have unavoidable computational limits, with no general shortcut for search.

Review Questions

  1. How do P and NP differ in terms of what can be done efficiently as input size increases?
  2. Explain why NP-complete problems are considered “worst of the worst” within NP.
  3. What would a polynomial-time algorithm for SAT imply about P versus NP?

Key Points

  1. P versus NP asks whether efficient verification of solutions implies efficient discovery of solutions.
  2. P problems have polynomial-time algorithms; NP problems allow polynomial-time verification even if finding solutions is hard.
  3. Factoring supports the intuition behind cryptography: multiplication is easy, but reversing it is believed to be hard, even though hardness isn’t proven.
  4. The traveling salesman problem shows the verification-versus-discovery gap: checking a proposed route is easier than finding the optimal one.
  5. NP-complete problems (starting with SAT) are inter-reducible, so solving one in polynomial time would solve them all.
  6. A proof that P = NP would imply major changes to cryptography and optimization; a proof that P ≠ NP would confirm deep computational limits.
  7. The Clay Mathematics Institute offers $1 million for a correct proof of P versus NP.

Highlights

SAT (Boolean satisfiability) was identified as NP-complete by Stephen Cook in 1971, making it a gateway to the entire NP-complete class.
If any NP-complete problem gets a polynomial-time solution, all NP-complete problems follow—forcing P = NP.
RSA-style cryptography relies on the practical asymmetry between easy multiplication and hard factoring, even though the hardness can’t be formally proven.
The traveling salesman problem dramatizes why verification can be easy while discovery explodes combinatorially.

Topics

Mentioned

  • MongoDB Atlas
  • Stephen Cook
  • John Nash
  • P
  • NP
  • NP-complete
  • SAT
  • CPU