
The ARM chip race is getting wild… Apple M4 unveiled

Fireship · 5 min read

Based on Fireship’s video on YouTube. If you enjoy this content, support the original creators by watching, liking, and subscribing.

TL;DR

Apple’s M4 is marketed with a 50% CPU speedup over the M2, built-in ray tracing, and a neural engine rated at 38 trillion operations per second for AI workloads.

Briefing

Apple’s newly unveiled M4 chip is positioned as a decisive step in the Arm-based computing race, with claims of major CPU gains and a neural engine built for on-device AI. Apple says the M4 delivers a CPU that’s 50% faster than the M2’s, includes built-in ray tracing, and can push 38 trillion operations per second through its neural engine, a figure Apple frames as making it the fastest AI-focused chip in the PC market at the moment. The bigger strategic twist is timing: Apple rolled out the M4 at an iPad event rather than at its upcoming Worldwide Developers Conference (WWDC), suggesting the company wants to set the hardware baseline early while software and developer momentum catch up.

That launch lands in the middle of a broader “arms race” for Arm chips across consumer laptops, Windows-on-Arm efforts, and cloud infrastructure. Arm itself is not a chip maker; it licenses an architecture that companies design around and manufacturers build—often at large-scale foundries such as Taiwan Semiconductor (TSMC). The ecosystem has shifted since Apple’s M1 in 2020, which made Mac laptops feel faster and more efficient, undermining the old assumption that Arm was only good for phones. That change helped drive Microsoft’s Project Volterra to run Windows on Arm, and it also accelerated adoption in data centers, where efficiency matters as much as raw performance.

The competitive landscape is now crowded. Major cloud and platform players—including Amazon, Google, and Microsoft—are producing their own Arm-based data center chips such as AWS Graviton and Google Axion. Meanwhile, Qualcomm is expected to release the Snapdragon X Elite, with performance claims that could surpass the M4, including a higher neural throughput figure (45 trillion operations per second versus Apple’s 38 trillion). The transcript also flags a likely reality check: benchmark numbers in this space can be “cheated” or optimized for specific tests, so the headline comparisons may not translate cleanly to everyday AI workloads.
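
That caveat is easy to make concrete. The short Python sketch below is a generic illustration, not something from the video: it times an ordinary float32 matrix multiply and compares the achieved operations per second against a vendor’s headline TOPS figure. Rated numbers are typically quoted for low-precision operations on a dedicated accelerator path, so a general-purpose workload usually lands far below them.

```python
import time
import numpy as np

RATED_TOPS = 38.0  # a vendor's headline figure in trillions of ops/sec (here, Apple's M4 claim)

def measured_tops(n: int = 2048, repeats: int = 5) -> float:
    """Time an n x n float32 matmul and return achieved trillions of ops per second."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up pass so the timing excludes one-time setup costs
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = time.perf_counter() - start
    ops = 2 * n**3 * repeats  # a matmul costs roughly 2*n^3 multiply-add operations
    return ops / elapsed / 1e12

achieved = measured_tops()
print(f"Rated: {RATED_TOPS} TOPS; measured on this workload: {achieved:.3f} TOPS")
print(f"Fraction of the headline number actually delivered: {achieved / RATED_TOPS:.1%}")
```

On most machines the measured figure is a small fraction of any NPU rating, which is exactly why headline TOPS comparisons between chips say little about how a specific AI model will actually run.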

Apple’s AI strategy appears aimed at running large language models on-device or at the edge. If that works, it would enable low-latency offline AI and keep user conversations—such as chat history with an on-device assistant—private without sending everything to the cloud. But the lead may be temporary. The transcript argues that the next wave of Arm chips, paired with aggressive Windows-on-Arm competition, could quickly narrow Apple’s advantage.
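
To ground what “running a large language model on-device” looks like in practice, here is a minimal sketch using the open-source llama-cpp-python bindings, which run quantized models locally on Apple Silicon and other laptops. The model path is a placeholder, and this is an illustration of the general pattern, not Apple’s own stack.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="models/local-model.gguf",  # placeholder: any GGUF model downloaded to disk
    n_ctx=2048,                            # modest context window suited to laptop memory
)

# Everything below runs locally: neither the prompt nor the output leaves the machine.
result = llm(
    "Q: Why might a laptop run an LLM locally instead of calling a cloud API? A:",
    max_tokens=96,
    stop=["Q:"],
)
print(result["choices"][0]["text"].strip())
```

Because the model weights, the prompt, and the generated text all stay in local storage and memory, nothing in this loop touches a network, which is the latency and privacy argument the transcript makes.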

Finally, the race isn’t only about Arm versus Arm. x86 remains deeply entrenched in decades of infrastructure, and the transcript points to Intel’s “Arrow Lake” and AMD’s “Ryzen AI 9” as evidence that x86 is far from dead. Still, the central bet is that whichever platform can best run AI natively—whether on laptops or in the cloud—will shape what consumers buy next. With WWDC rumored to bring another major Mac update, the Arm chip race is set to intensify rather than cool off.

Cornell Notes

Apple’s M4 chip is being marketed as a major leap for Arm-based PCs, combining a claimed 50% CPU speedup over the M2 with built-in ray tracing and a neural engine rated at 38 trillion operations per second for AI. The launch matters because it signals Apple’s push toward running large language models on-device or at the edge, enabling low-latency offline AI and stronger privacy. The competitive pressure is rising: Microsoft’s Windows-on-Arm efforts and cloud Arm chips from Amazon and Google have already validated the efficiency case. Qualcomm’s upcoming Snapdragon X Elite is expected to challenge Apple’s AI throughput claims, though those benchmark comparisons may be tuned to specific tests rather than everyday workloads. x86 isn’t going away—Intel and AMD are still iterating—but the AI-native performance race is increasingly centered on Arm.

Why does Arm’s architecture matter more than any single chip maker’s hardware?

Arm is an architecture licensing business rather than a chip manufacturer. Companies license the instruction set design, build their own chip implementations, and then rely on large foundries (the transcript mentions Taiwan Semiconductor) to manufacture them. That model explains why many players—Apple, Qualcomm, and cloud providers—can all compete in the same Arm ecosystem while still differentiating their chips.

What changed in 2020 that made Arm-based laptops credible for mainstream computing?

Apple’s M1 in 2020 shifted perceptions by delivering laptops that felt faster and more efficient, with less heat. That momentum helped motivate Microsoft’s Project Volterra to run Windows on Arm. It also aligned with data center priorities, where efficiency can translate into lower power and better performance-per-watt.

How is Apple positioning the M4 for AI, and what user benefit is implied?

Apple’s M4 is framed around its neural engine throughput—38 trillion operations per second—and the idea of running large language models on-device or at the edge. The implied payoff is low latency and offline capability, plus privacy benefits because chat messages and interactions could stay on the device rather than being sent to the cloud.

Which upcoming competitor is highlighted as a potential performance threat to the M4?

Qualcomm’s expected Snapdragon X Elite is cited as a chip that could outperform the M4, with a claimed 45 trillion operations per second versus Apple’s 38 trillion. The transcript also warns that these benchmark numbers may be “cheated,” meaning real-world AI performance could differ from the marketing metrics.

Does the transcript treat x86 as obsolete?

No. It argues x86 is likely to persist because it has existed for over 45 years and is deeply embedded in infrastructure. It points to Intel’s “Arrow Lake” and AMD’s “Ryzen AI 9” as ongoing efforts to improve power efficiency and AI-related performance, even while Arm dominates the AI-native narrative.

Why are AI benchmark throughput numbers becoming a battleground for hardware companies?

The transcript links AI throughput claims to consumer choice: the device that can run the best AI model natively is positioned as the one people will buy. Since Apple sells hardware, it would be strategically damaging if Windows laptops ran AI better than Apple’s devices, so companies compete aggressively on neural performance metrics.

Review Questions

  1. How does Arm’s licensing model shape competition among Apple, Qualcomm, and cloud providers?
  2. What practical advantages are claimed for running large language models on-device or at the edge?
  3. Why might benchmark throughput numbers (like 38 vs. 45 trillion operations per second) fail to predict real-world AI performance?

Key Points

  1. Apple’s M4 is marketed with a 50% CPU speedup over the M2, built-in ray tracing, and a neural engine rated at 38 trillion operations per second for AI workloads.

  2. Apple’s M4 rollout at an iPad event—rather than WWDC—signals an early hardware push while developer and software momentum ramps up later.

  3. Arm’s architecture is licensed to chip designers and manufactured by large foundries, enabling many companies to compete on the same underlying instruction set.

  4. Microsoft’s Windows-on-Arm push (Project Volterra) and cloud Arm chips (AWS Graviton, Google Axion) validate Arm’s efficiency beyond smartphones.

  5. Apple’s AI direction emphasizes on-device or edge execution of large language models to reduce latency and improve privacy by keeping chat history local.

  6. Qualcomm’s expected Snapdragon X Elite is positioned as a direct challenge, with higher claimed neural throughput (45 trillion operations per second), though benchmark comparisons may be unreliable.

  7. x86 remains entrenched despite Arm’s momentum, with ongoing updates from Intel and AMD aimed at power efficiency and AI performance.

Highlights

  • Apple claims the M4’s neural engine can reach 38 trillion operations per second, framing it as the fastest PC AI chip available at the time of launch.
  • The strategic goal is on-device or edge LLMs: low latency and privacy by keeping chat messages local.
  • Qualcomm’s Snapdragon X Elite is expected to counter with a higher throughput claim (45 trillion operations per second), intensifying the Arm AI race.
  • Arm’s architecture is licensed rather than manufactured in-house, so competition plays out across chip designers and foundry capacity rather than a single manufacturer’s output.

Topics

Mentioned

  • CPU
  • AI
  • LLM
  • WWDC