
stop trusting cloud cameras!! (here's what I use instead)

NetworkChuck · 5 min read

Based on NetworkChuck's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use Frigate with cameras that support RTSP; RTSP is the ingestion path for live video into the local AI pipeline.

Briefing

Self-hosted Frigate is positioned as a privacy-first alternative to cloud-connected security cameras—because it keeps video processing local and avoids exposing camera feeds to third parties or dark-web resale. The pitch is blunt: if surveillance cameras connect to the cloud, someone could potentially watch and monetize those streams. Frigate counters that risk with an “in-house” setup that runs an open-source AI surveillance stack on your own hardware, adding facial recognition, license plate recognition, object detection, and semantic search—while integrating with Home Assistant for automation.

The practical walkthrough starts with the minimum building blocks: a local server (anything that can run Docker, such as a Raspberry Pi or a spare laptop), IP cameras that support RTSP (Real-Time Streaming Protocol), and a local network connection. The creator demonstrates a “quick and dirty” build on a Raspberry Pi using Docker Compose to run Frigate, then verifies the system is healthy via the Frigate web UI. The key technical requirement is RTSP streaming; once RTSP is enabled on the camera, Frigate can ingest the stream, record events, and run AI detections.
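The "quick and dirty" build boils down to a short Compose file. A minimal sketch along the lines of the Frigate docs is shown below; the volume paths and the Coral device passthrough are placeholders to adapt to your own hardware:

```yaml
# Hypothetical minimal docker-compose.yml for running Frigate locally.
services:
  frigate:
    container_name: frigate
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    shm_size: "512mb"              # shared memory for frame buffers; size with camera count
    devices:
      - /dev/bus/usb:/dev/bus/usb  # pass through a Coral USB accelerator, if present
    volumes:
      - ./config:/config           # Frigate's config.yml lives here
      - ./storage:/media/frigate   # recordings and clips
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "8971:8971"                # web UI
      - "8554:8554"                # RTSP restreams
```

With this in place, `docker compose up -d` brings the stack up and the web UI confirms the system is healthy.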

A major implementation detail is using dual RTSP streams when available. The setup distinguishes between a lower-quality “detect” stream (optimized for AI inference) and a higher-quality “record” stream (optimized for evidence and review). Motion-based recording is enabled so the system stores footage only when movement is detected, reducing storage and bandwidth pressure.
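In Frigate's config, this split is expressed by assigning roles to each input. A hedged sketch of one camera entry follows; the camera name, credentials, and Reolink-style stream paths are placeholders for your own device:

```yaml
# Hypothetical camera entry in Frigate's config.yml with dual RTSP streams.
cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.50:554/h264Preview_01_sub   # low-res substream
          roles:
            - detect
        - path: rtsp://user:pass@192.168.1.50:554/h264Preview_01_main  # full-res main stream
          roles:
            - record
    detect:
      width: 640
      height: 480
      fps: 5              # low frame rate keeps inference cheap

record:
  enabled: true
  retain:
    days: 7
    mode: motion          # keep footage only around detected motion
```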

As the camera count grows, performance bottlenecks appear—especially on CPU-only setups. With multiple cameras, detector inference speed and CPU usage climb sharply, and the system becomes harder to sustain on a small device. To address this, the walkthrough adds AI accelerators. A Raspberry Pi AI HAT (built around a Hailo chip) dramatically reduces CPU load and improves inference speed, while a Google Coral USB AI accelerator offloads AI inference so the main machine can focus on video decoding and streaming. With Coral, the system scales to around 10 cameras, running face detection and other AI features while keeping GPU utilization focused on decoding.

The most consequential part of the experience isn’t the AI—it’s the network. After roughly 12 hours, the Wi‑Fi network degrades: pages stop loading, cameras behave erratically, and the Frigate container eventually needs restarting. Troubleshooting rules out raw bandwidth as the primary cause by checking the UniFi network controller metrics. Instead, the culprit is “airtime” and retransmissions: a high TX retry rate on one access point indicates packets frequently fail on the first attempt, creating escalating retransmit traffic that fills buffers over time. The creator also notes that Reolink E1 Pro cameras can exhibit RTSP stream degradation after long uptime.

Three fixes stabilize the system: (1) tune camera stream behavior by enabling constant bit rate (CBR/“fluency”) and using TCP instead of UDP, (2) use Go2RTC so multiple devices don’t each open separate camera connections—centralizing stream handling through Frigate, and (3) reboot problematic cameras on a staggered schedule (about once per 24 hours) to avoid RTSP degradation. Finally, adding an extra access point reduces airtime contention by distributing cameras across APs. After these changes, Frigate runs smoothly again, and the system remains local—integrated with Home Assistant—without relying on cloud camera feeds.
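Fixes (1) and (2) can both live in Frigate's config. A hedged sketch follows, assuming placeholder camera credentials; `preset-rtsp-restream` is the TCP-transport input preset from the Frigate docs:

```yaml
# Hypothetical sketch: Go2RTC pulls each camera exactly once, and Frigate
# (plus any other viewer) connects to the local restream instead of the camera.
go2rtc:
  streams:
    front_door:
      - rtsp://user:pass@192.168.1.50:554/h264Preview_01_main

cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/front_door  # local restream: one camera connection total
          input_args: preset-rtsp-restream        # TCP transport preset
          roles:
            - record
```

Any extra viewers (Home Assistant dashboards, phones) then hit the restream on port 8554 rather than opening their own RTSP sessions against the camera.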

Cornell Notes

Frigate is presented as a local, privacy-focused surveillance system that avoids cloud-connected camera feeds by running AI processing on your own hardware. The setup hinges on cameras that support RTSP and a Docker-based installation on a local server (Raspberry Pi, laptop, or similar). For scalability and better performance, the system uses separate RTSP streams for AI detection (low-res, low-FPS) and recording (high-res), and it can offload inference to accelerators like a Hailo HAT or a Google Coral USB device. When Wi‑Fi becomes unstable after long uptime, the root cause is high TX retry/airtime contention and RTSP degradation on some cameras, not just bandwidth. Stabilizing steps include stream tuning (CBR, TCP), Go2RTC to centralize stream connections, scheduled camera reboots, and adding access points.

Why does RTSP matter for Frigate, and how does it fit into the local AI workflow?

Frigate depends on RTSP to pull live video from IP cameras. RTSP is the streaming protocol that lets Frigate request a continuous video feed from a camera on the local network. Once RTSP is enabled (and the correct RTSP URL/port is known), Frigate can ingest the stream, run object detection and other AI features, and record events based on configured rules like motion.
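An RTSP URL generally takes the form below; the path component is vendor-specific, and the Reolink-style example is illustrative only:

```
rtsp://<username>:<password>@<camera-ip>:554/<vendor-specific-path>

rtsp://admin:secret@192.168.1.50:554/h264Preview_01_main
```

Port 554 is the RTSP default; many cameras require RTSP to be explicitly enabled in their settings before the URL responds.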

What’s the purpose of using two RTSP streams (detect vs record) instead of one?

Dual streams let the system balance AI speed and recording quality. The “detect” stream is typically lower quality with fewer frames (e.g., around 5 FPS) so the AI can infer quickly without overwhelming the CPU or accelerator. The “record” stream is higher quality for reviewing events and storing evidence. Frigate can ingest both streams simultaneously and apply detection to the detect feed while recording from the record feed.

How do AI accelerators change the performance picture when scaling beyond a few cameras?

CPU-only inference becomes expensive as camera count rises, driving detector inference speed and CPU usage up. Adding hardware accelerators shifts AI inference off the CPU. The Hailo AI HAT reduces detector CPU load and improves inference speed on Raspberry Pi setups, while the Google Coral USB accelerator offloads inference to a dedicated TPU-style device. In the Coral setup, the GPU mainly handles video decoding, and Coral handles inference, enabling around 10 cameras with solid inference timing.
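Pointing Frigate at a USB Coral is a small config change. A sketch, following the detector section format in the Frigate docs:

```yaml
# Hypothetical detectors section: run inference on a USB Coral instead of the CPU.
detectors:
  coral:
    type: edgetpu
    device: usb
```

With this set (and the USB device passed through to the container), the detector inference time reported in Frigate's UI drops sharply compared with the default CPU detector.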

What actually breaks after ~12 hours on Wi‑Fi, and how was bandwidth ruled out?

The system degrades after long uptime with symptoms like flickering cameras and stalled web access. Bandwidth was checked in the UniFi network controller and found to be barely impacted, even though the system had been working for 12+ hours. The real signal was a high TX retry rate on one access point (e.g., ~29% on “Dumbledore” vs ~7% on “Hagrid”), meaning many packets failed on the first transmit and required retransmission. That retransmit traffic accumulates over time, filling buffers and eventually making the network feel “dead.”

Which configuration and network changes stabilized the system, and why?

Stability came from three categories: stream tuning, connection architecture, and network capacity. Stream tuning included enabling constant bit rate (CBR/“fluency”) and using TCP instead of UDP for more predictable, reliable delivery. Go2RTC centralized stream handling so multiple clients didn’t each open separate camera connections; everything routes through Frigate. Scheduled camera reboots (staggered across the day) addressed the RTSP degradation seen on Reolink E1 Pro cameras. Finally, adding an extra access point reduced airtime contention by distributing cameras across APs.
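Since the system integrates with Home Assistant, the staggered reboots can be expressed as simple time-triggered automations. This sketch assumes the cameras' integration exposes restart button entities; the entity ids and times are placeholders:

```yaml
# Hypothetical Home Assistant automations: reboot cameras on a staggered daily schedule.
automation:
  - alias: "Reboot front door camera"
    trigger:
      - platform: time
        at: "03:00:00"
    action:
      - service: button.press
        target:
          entity_id: button.front_door_restart
  - alias: "Reboot driveway camera"
    trigger:
      - platform: time
        at: "04:00:00"      # offset so the cameras never reboot simultaneously
    action:
      - service: button.press
        target:
          entity_id: button.driveway_restart
```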

Review Questions

  1. What prerequisites must a camera meet for Frigate to ingest its video, and what protocol is used?
  2. How do detect and record streams differ in purpose, and how does Frigate use them together?
  3. If Wi‑Fi collapses after many hours, what metrics point to retransmission/airtime issues rather than bandwidth saturation?

Key Points

  1. Use Frigate with cameras that support RTSP; RTSP is the ingestion path for live video into the local AI pipeline.

  2. Run Frigate via Docker on a local server (Raspberry Pi, laptop, or similar) and store recordings/config locally using mapped Docker volumes.

  3. For better performance and scalability, configure separate RTSP streams: a low-FPS detect stream for AI inference and a high-quality record stream for event footage.

  4. When scaling on Wi‑Fi, watch TX retry rate and airtime contention; high retransmissions can accumulate over time even if bandwidth looks fine.

  5. Offload AI inference with accelerators like a Hailo HAT or a Google Coral USB device to keep CPU usage manageable as camera counts rise.

  6. Stabilize long-running RTSP streams by tuning camera settings (CBR/fluency, TCP) and rebooting cameras on a staggered schedule when RTSP quality degrades.

  7. Improve stream reliability and reduce connection overhead by using Go2RTC so other devices connect to Frigate rather than opening multiple direct camera streams.

Highlights

Frigate’s privacy advantage comes from keeping AI processing and video handling local—no cloud feed required—while still enabling facial recognition, object detection, and semantic search.
Dual-stream RTSP (detect vs record) is the practical trick that makes AI inference efficient without sacrificing recording quality.
The long-uptime failure mode wasn’t bandwidth saturation; high TX retry rates on a specific access point signaled retransmission/airtime contention instead.
Go2RTC and scheduled camera reboots were used to address both connection overhead and RTSP degradation on Reolink E1 Pro cameras.
Adding AI accelerators (Hailo or Google Coral) is what makes multi-camera deployments feasible without crushing CPU performance.

Topics

  • Local AI Surveillance
  • Frigate Setup
  • RTSP Dual Streams
  • Wi‑Fi Airtime Retries
  • AI Hardware Accelerators

Mentioned

  • RTSP
  • AI
  • CPU
  • GPU
  • NVR
  • FFMPEG
  • ONVIF
  • UDP
  • TCP
  • CBR
  • AP
  • TX
  • TPU
  • YOLO