
Nvidia does cybersecurity?!?.....and networking?

NetworkChuck · 6 min read

Based on NetworkChuck's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Nvidia positions BlueField DPUs as the hardware acceleration layer that makes software-defined networking and deep packet inspection feasible at 25Gbps and above, where CPU-based approaches become bottlenecks.

Briefing

Nvidia’s push into networking and cybersecurity hinges on a simple bottleneck: software-defined networking and security run on general-purpose CPUs, but CPUs struggle to keep up with the packet inspection, encryption, compression, and storage workloads modern data centers demand—especially at 100Gbps and beyond. The company’s answer is the BlueField data processing unit (DPU), paired with its DOCA software platform, which offloads those networking and security functions from the CPU and accelerates them in hardware.

Kevin Deierling, Nvidia’s SVP of networking, frames the shift as part of a broader data-center evolution. The “unit of computing” is no longer just the CPU; it is a coordinated system of CPU, GPU, and DPU. GPUs drive AI and accelerated workloads, while DPUs handle data-intensive networking tasks such as packet switching, deep packet inspection, and security functions like IPsec/TLS encryption and decryption—work that otherwise consumes large portions of CPU capacity. In Deierling’s examples, a CPU tasked with inspecting every packet for denial-of-service mitigation can be overwhelmed at high bandwidth, and even routine data movement can become CPU-bound. DPUs are positioned as the fix: they can move data at line rate using RDMA, and they can run security and storage operations without consuming CPU cores.

The performance story is tied to three DPU capabilities: offload, accelerate, and isolate. Offload moves specific tasks away from x86 CPUs; accelerate uses dedicated hardware accelerators rather than embedded ARM processors; isolate separates infrastructure services from application processing domains, improving security and operational flexibility. That isolation is presented as a practical advantage for environments that must run untrusted or internet-facing workloads—such as cloud gaming—because infrastructure can be protected in a separate domain from the application.

Security is where the pitch gets most concrete. Deierling describes how software-defined security—especially deep packet inspection—hits CPU limits quickly. Working with Palo Alto Networks, Nvidia’s BlueField platform and DOCA software API reportedly enabled intelligent traffic offload (ITO), letting the DPU inspect traffic and decide what the CPU should handle. The claimed result is roughly nine times better performance, reaching close to 100Gbps while adding zero CPU load for the inspection workload. Nvidia also highlights Morpheus, a cybersecurity application that combines DPU-generated telemetry (a “camera” into network activity) with AI analysis to detect suspicious behavior and quarantine threats. The emphasis is less on catching rare, perfectly crafted attacks and more on preventing common operational mistakes—like exposed credentials or sensitive data—by automating detection that humans can’t reliably perform across massive log volumes.

Beyond hardware, Nvidia’s DOCA is positioned as an open SDK and library layer that lets partners build on DPUs much as Nvidia’s GPU ecosystem builds on CUDA. DOCA is presented as the bridge for networking, storage, and security vendors, plus developers building digital twins and simulation environments (Omniverse) that require fast data exchange and deterministic timing. Deierling also connects the timing problem to real-world streaming and media workflows, including precision packet pacing and synchronization features (Kronos) for large-scale, tightly timed delivery.

Finally, the networking-and-security strategy is pitched as both enterprise- and cloud-ready. Hyperscalers adopt first because they monetize CPU cores directly; offloading SDN and security reduces CPU consumption and improves margins. Nvidia then aims to bring the approach to tier-two clouds and enterprises, with partnerships such as VMware and Dell preserving familiar virtualization workflows while adding DPU acceleration. The overall message: DPUs and DOCA are meant to make software-defined networking and security scalable, while AI-driven automation and digital twins extend those capabilities across industries—from data centers to film production and cloud gaming.

Cornell Notes

Nvidia’s networking push centers on the idea that software-defined networking (SDN) and software-defined security break down when they rely on CPUs for heavy packet inspection and encryption at modern bandwidths. The company’s BlueField data processing unit (DPU) offloads those tasks from x86 CPUs, accelerates them with dedicated hardware, and isolates infrastructure services for better security and operational flexibility. Nvidia claims DPUs can handle functions like deep packet inspection and storage/network data movement with zero CPU load, using technologies such as RDMA for line-rate transfers. DOCA, Nvidia’s DPU software platform, is positioned as an SDK and library layer so partners can build networking, storage, and security features on top of the accelerated hardware. The approach targets both hyperscalers (CPU-cost pressure) and enterprises, with examples including Palo Alto Networks’ firewall deep-inspection offload and AI-driven cybersecurity via Morpheus.

Why do software-defined networking and security struggle when they run on CPUs?

The core issue is that CPUs are “the wrong horse” for data-center-scale packet processing. SDN and security bring flexibility, but when packet inspection, encryption/decryption (e.g., IPsec/TLS), compression, and storage-related networking tasks run on general-purpose CPU cores, performance collapses at high throughput. Deierling offers a rule of thumb: on a 100Gbps link, CPU-based deep inspection tops out around ~10Gbps of effective throughput, bringing CPU capacity to its knees well below line rate. At data-center scale, the CPU becomes the bottleneck, which can even undermine the security goal (e.g., DDoS mitigation requiring inspection of every packet).
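The rule of thumb above can be made concrete with some back-of-envelope arithmetic (the ~10Gbps-per-socket figure is the talk's rough estimate, not a measured benchmark):

```python
# Back-of-envelope: if one CPU socket sustains roughly 10 Gbps of deep
# inspection, how many sockets would full-rate inspection consume?
import math

def sockets_needed(link_gbps: float, inspect_gbps_per_socket: float = 10.0) -> int:
    """Sockets fully consumed if every packet on the link must be inspected."""
    return math.ceil(link_gbps / inspect_gbps_per_socket)

for link in (25, 100, 200):
    print(f"{link} Gbps link -> ~{sockets_needed(link)} sockets for inspection alone")
```

At 100Gbps, inspection alone would consume roughly ten sockets' worth of capacity, which is why the talk argues the CPU itself becomes the bottleneck.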

What exactly does the DPU change in the data path?

The DPU is inserted into a server and acts as a hardware-accelerated engine for networking, security, and storage tasks. Nvidia describes three mechanisms: offload (move tasks off the CPU), accelerate (use dedicated accelerators rather than relying on embedded ARM processors), and isolate (separate infrastructure services from application processing domains). For data movement, Nvidia claims it can transfer files without involving the CPU by using RDMA and achieving line-rate performance (examples cited include 100Gbps and 200Gbps transfers). For networking and security, the DPU can run software-defined networking/security functions with hardware acceleration so the CPU isn’t forced to inspect every packet.
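The zero-copy idea behind RDMA—the NIC reading and writing application buffers directly, with no CPU-side copies—has a loose analogy in Python's buffer protocol. This sketch is only an illustration of copy-free buffer access, not RDMA itself:

```python
# Loose analogy for zero-copy data movement: a memoryview exposes a window
# into an existing buffer without copying bytes, much as RDMA lets the NIC
# access registered application memory directly.
buf = bytearray(b"x" * 1_000_000)     # stand-in for a registered buffer
view = memoryview(buf)[1024:2048]     # a window into it -- no copy made
copy = bytes(buf[1024:2048])          # slicing to bytes DOES copy the data

view[0:4] = b"DATA"                   # writes through to the original buffer
print(bytes(buf[1024:1028]))          # b'DATA'
```

The contrast between `view` (shared memory) and `copy` (duplicated memory) mirrors the CPU-copy overhead that RDMA is designed to eliminate.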

How does Nvidia connect the DPU to real security products and outcomes?

A key example is Nvidia’s work with Palo Alto Networks, a firewall and deep-packet-inspection vendor. Deierling says Palo Alto Networks used Nvidia’s BlueField platform and DOCA software API to offload traffic inspection to the DPU (described as intelligent traffic offload, or ITO). The DPU inspects packet headers and flows and decides which actions should be handled by the CPU versus forwarded, redirected, or load-balanced. Nvidia claims this yields about nine times better performance, bringing inspection close to 100Gbps while using zero CPU load for the offloaded inspection workload.
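The split described above—fast-path forwarding on the DPU, slow-path deep inspection on the CPU—can be sketched with a toy flow table. All names and the string verdicts here are illustrative, not the Nvidia or Palo Alto Networks APIs:

```python
# Toy sketch of the intelligent-traffic-offload idea: a fast path (standing
# in for the DPU) tracks flows by 5-tuple; only flows it cannot vouch for
# are punted to the slow path (standing in for CPU-based deep inspection).
from typing import NamedTuple

class Flow(NamedTuple):
    src: str
    dst: str
    sport: int
    dport: int
    proto: str

class FastPath:
    def __init__(self) -> None:
        self.trusted: set[Flow] = set()  # flows already cleared by inspection

    def handle(self, flow: Flow) -> str:
        if flow in self.trusted:
            return "forward"             # stays on the fast path, no CPU work
        return "punt-to-cpu"             # early packets go up for inspection

    def mark_trusted(self, flow: Flow) -> None:
        """Called once the slow path has finished inspecting the flow."""
        self.trusted.add(flow)

fp = FastPath()
f = Flow("10.0.0.1", "10.0.0.2", 40000, 443, "tcp")
print(fp.handle(f))   # punt-to-cpu
fp.mark_trusted(f)
print(fp.handle(f))   # forward
```

Once a flow is marked trusted, every subsequent packet is handled entirely on the fast path, which is the mechanism behind the "zero CPU load" claim for offloaded traffic.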

What role does AI play in Nvidia’s cybersecurity story beyond raw packet inspection?

AI is used to automate detection that would be impractical to do manually across massive telemetry and log volumes. Deierling describes Morpheus as combining DPU-generated telemetry (“camera” into network activity) with AI analysis to spot suspicious patterns and quarantine activity. The emphasis is on behavior-based detection—looking for things that resemble malicious behavior rather than relying solely on narrow, pre-defined rules. The goal is to catch both threats and common operational mistakes (like exposed credentials or sensitive data) that often cause breaches during development-to-production transitions.
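The "operational mistakes" angle—credentials or keys leaking into logs—can be illustrated with a simple pattern scan. Real pipelines like Morpheus use trained models over far richer telemetry; the regexes and sample lines below are purely illustrative:

```python
# Sketch: flag log/telemetry lines that look like credential exposure.
import re

LEAK_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[=:]\s*\S+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),            # AWS access-key-id shape
    re.compile(r"(?i)\bapi[_-]?key\s*[=:]\s*\S+"),
]

def flag_leaks(lines):
    """Return the lines matching any known leak pattern."""
    return [line for line in lines if any(p.search(line) for p in LEAK_PATTERNS)]

logs = [
    "GET /health 200",
    "db connect password=hunter2",
    "config loaded api_key: sk-test-123",
]
print(flag_leaks(logs))
```

A rule list like this only catches known shapes; the argument for AI-based detection is precisely that it can flag behavior that resembles a leak without an exact pattern on file.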

What is DOCA, and why does Nvidia treat it as important for partners and developers?

DOCA is Nvidia’s DPU software platform—an open SDK interface plus libraries and reference applications that partners can build on. Nvidia compares it to CUDA’s role in the GPU ecosystem and argues that DOCA lets networking, storage, and security vendors implement accelerated features without rewriting everything at the hardware level. The platform is also positioned as a way to protect partners’ investment: Nvidia plans future BlueField generations (including ones adding GPU capabilities) while keeping DOCA-based programs compatible so new accelerations can be adopted over time.

How does Nvidia argue the DPU approach fits both cloud and enterprise environments?

Hyperscalers are described as early adopters because they monetize CPU cores: if SDN/security consumes 30–40% of CPU capacity, those cores can’t be sold to customers. Offloading those functions to DPUs reduces CPU consumption and improves margins. Nvidia then claims the same benefits apply to tier-two clouds and enterprises because modern workloads—AI, edge robotics, video streaming, and applications with strict service-level agreements—require precise packet delivery and hardware acceleration. Nvidia also highlights timing and streaming needs via Kronos (precision timing and packet pacing) for large-scale delivery.
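The monetization argument reduces to simple arithmetic: cores spent on infrastructure cannot be sold. The 30–40% figures are the talk's rough percentages, and the 64-core host is an assumed example:

```python
# If SDN/security consumes a fraction of every host's cores, offloading it
# to a DPU frees those cores to be sold to customers.
def sellable_cores(total_cores: int, infra_fraction: float) -> int:
    """Cores left over for customer workloads after infrastructure overhead."""
    return int(total_cores * (1 - infra_fraction))

total = 64
for frac in (0.30, 0.40):
    print(f"infra load {frac:.0%}: {sellable_cores(total, frac)}/{total} cores "
          f"sellable; with full DPU offload: {total}/{total}")
```

At 40% infrastructure overhead, a 64-core host sells only 38 cores; with the overhead offloaded, all 64 become revenue-bearing, which is the margin improvement hyperscalers are said to chase.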

Review Questions

  1. How do offload, accelerate, and isolate work together to address CPU bottlenecks in SDN and security?
  2. What kinds of tasks does Nvidia claim the DPU can perform with zero CPU load, and what technologies are cited to support that?
  3. Why does Nvidia expect hyperscalers to adopt DPUs earlier than many enterprises, according to the CPU-monetization argument?

Key Points

  1. Nvidia positions BlueField DPUs as the hardware acceleration layer that makes software-defined networking and deep packet inspection feasible at 25Gbps and above, where CPU-based approaches become bottlenecks.

  2. The CPU bottleneck is tied to data-center-scale inspection needs such as DDoS mitigation, encryption/decryption, and storage/network data movement.

  3. Nvidia describes DPU benefits as offload (move tasks off x86), accelerate (use dedicated hardware accelerators), and isolate (separate infrastructure services from application domains for security and flexibility).

  4. A cited security partnership with Palo Alto Networks claims DPU-based intelligent traffic offload (ITO) delivers roughly nine times better performance and near-100Gbps inspection while using zero CPU load for that inspection workload.

  5. Nvidia’s Morpheus cybersecurity application combines DPU telemetry with AI to detect suspicious behavior and automate quarantine, focusing on both threats and operational mistakes that expose sensitive data.

  6. DOCA is presented as an open SDK and library layer that enables partners to build accelerated networking, storage, and security capabilities on top of BlueField, with an eye toward forward compatibility across future BlueField generations.

  7. Nvidia argues adoption economics favor hyperscalers first because reducing CPU consumption frees cores for customer-facing workloads, then expands to enterprises and tier-two clouds with similar SLA and bandwidth pressures.

Highlights

BlueField DPUs are pitched as a fix for SDN/security workloads that overwhelm CPUs at high throughput, enabling deep inspection and security functions without relying on CPU cores.
Nvidia claims line-rate data movement using RDMA, including file transfers at 100Gbps/200Gbps without CPU involvement.
The Palo Alto Networks example ties DPU offload to a measurable outcome: about nine times better firewall deep-inspection performance and near-100Gbps throughput with zero CPU load for inspection.
Morpheus uses DPU-generated telemetry plus AI to automate detection and quarantine, targeting both malicious behavior and common credential/data exposure mistakes.
DOCA is framed as the developer-facing bridge—an SDK and libraries layer that lets partners build accelerated networking/security/storage features and keep them working across future BlueField generations.

Topics

  • BlueField DPU
  • Data Processing Units
  • Software-Defined Networking
  • Deep Packet Inspection
  • DOCA SDK
  • AI Cybersecurity
  • RDMA
  • Packet Pacing
  • Digital Twins
