I literally connected my brain to GPT-4 with JavaScript
Based on Fireship's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A wearable EEG device called the Crown can turn brain activity into machine-readable signals—and a JavaScript workflow can route those signals into GPT-4 for real-time, thought-triggered outputs. The core move is simple: measure brain waves with tiny electrodes, stream the data through a JavaScript SDK, detect specific mental states or trained thought patterns, and then use those detections to prompt GPT-4 via the OpenAI API.
The Crown sits on the back of the head and uses multiple electrodes to capture electrical impulses from the brain, which show up as brain waves. Those waves shift with cognitive state: delta waves appear during sleep at roughly 2 Hz, alpha waves rise to around 10 Hz when relaxed, and gamma waves climb to about 35 Hz during high focus. Brain activity is also dynamic—patterns change quickly based on mental processes and external stimuli—so the system needs more than raw measurement.
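The band boundaries can be made concrete with a small sketch. This is illustrative only, not part of any SDK: the thresholds are rough textbook EEG boundaries, and theta/beta are standard intermediate bands not mentioned in the transcript, included so the ranges are contiguous.

```javascript
// Illustrative only: map a dominant EEG frequency (Hz) to a named band.
// Thresholds are rough textbook boundaries, not part of the Neurosity SDK;
// theta and beta are standard in-between bands added for completeness.
function classifyBand(frequencyHz) {
  if (frequencyHz < 4) return "delta";  // deep sleep (transcript cites ~2 Hz)
  if (frequencyHz < 8) return "theta";
  if (frequencyHz < 13) return "alpha"; // relaxed (transcript cites ~10 Hz)
  if (frequencyHz < 30) return "beta";
  return "gamma";                       // high focus (transcript cites ~35 Hz)
}

console.log(classifyBand(2));  // "delta"
console.log(classifyBand(10)); // "alpha"
console.log(classifyBand(35)); // "gamma"
```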
Neurosity, the company behind the Crown, provides a dashboard that can train algorithms to recognize a person’s custom thought patterns. In the walkthrough, the creator trains the system by repeatedly imagining biting into a lemon and then relaxing when prompted. After enough repetitions (described as around 30), the dashboard begins to detect that specific mental pattern; when the thought is active, a chart “goes wild,” and when it isn’t, the chart steadies. The same approach can be used for other gestures or mental cues, such as a right-hand pinch or tongue-based patterns.
On the coding side, the workflow starts with a Node.js project and installs the Neurosity SDK. The program initializes the device using a device ID from the mobile app, logs in with email and password, then subscribes to a stream of raw brainwave data. The stream arrives at a sampling rate of 256 Hz (256 samples per second), batched into groups of 16 samples every 62.5 milliseconds (16 / 256 s), and split across eight channels. While the raw feed is available as JSON, the more practical path is subscribing to higher-level “states” (like calm or focus) or to trained events.
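The stream's numbers fit together: at 256 samples per second, a 16-sample batch spans exactly 16 / 256 s = 62.5 ms. A minimal sketch of that arithmetic, using a mock batch shaped like the raw feed described (the object layout here is illustrative, not the SDK's exact schema):

```javascript
// Sketch of the raw stream's shape and timing as described:
// 256 Hz sampling, 16-sample batches, 8 channels.
// The mock object's layout is illustrative, not the SDK's exact schema.
const SAMPLING_RATE_HZ = 256;
const SAMPLES_PER_BATCH = 16;
const CHANNELS = 8;

// One batch covers 16 / 256 seconds = 62.5 ms.
const batchIntervalMs = (SAMPLES_PER_BATCH / SAMPLING_RATE_HZ) * 1000;

// A mock batch: 8 channels, each with 16 (fake) readings.
const mockEpoch = {
  info: { samplingRate: SAMPLING_RATE_HZ, channels: CHANNELS },
  data: Array.from({ length: CHANNELS }, () =>
    Array.from({ length: SAMPLES_PER_BATCH }, () => Math.random() * 100)
  ),
};

console.log(batchIntervalMs);           // 62.5
console.log(mockEpoch.data.length);     // 8
console.log(mockEpoch.data[0].length);  // 16
```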
For thought-triggered control, the key feature is event recognition via Neurosity’s Kinesis interface: after training, the code can listen for a named event such as “left hand pinch.” When that event fires, the system can run side-effect code—most notably, sending a prompt to GPT-4. The OpenAI SDK then authenticates and calls a chat completion endpoint using the gpt-4 model, returning text similar to what users see in ChatGPT.
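The glue between a recognized event and a chat completion can be sketched as below. This is a hedged sketch, not the video's exact code: the OpenAI client is passed in as a parameter so the trigger logic stands alone, and the event name and prompt are illustrative values taken from the walkthrough's description.

```javascript
// Hedged sketch: turn a recognized brain event into a GPT-4 chat completion.
// The client is injected so this glue can be exercised without the real
// Neurosity or OpenAI SDKs; "leftHandPinch" and the prompt are illustrative.
async function onBrainEvent(openaiClient, prompt) {
  const completion = await openaiClient.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }],
  });
  return completion.choices[0].message.content;
}

// With the real SDKs (not runnable here), the hookup would look roughly like:
// neurosity.kinesis("leftHandPinch").subscribe(() => {
//   onBrainEvent(openai, "Give me an excuse for being late").then(speak);
// });
```

Injecting the client keeps the event-to-prompt step separable from device and API setup, which is why the commented subscription is the only SDK-specific piece.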
From there, the output can be converted to speech and transmitted to a Bluetooth earpiece, enabling hands-free responses. The transcript pushes the idea further with speculative use cases: thinking a trained cue to request an excuse for being late, using a cue to get help on a difficult exam question, or triggering image capture through camera-enabled glasses for GPT-4 to interpret and answer. The throughline is that brain-signal recognition plus a straightforward JavaScript-to-OpenAI pipeline can make “intent” act like an input device—turning cognition into a programmable trigger.
Cornell Notes
A wearable EEG device (Neurosity’s Crown) measures brain waves and streams them to a JavaScript SDK, where trained mental patterns can be detected as events. The transcript describes how brain activity shifts across delta, alpha, and gamma ranges, then focuses on training custom cues—like imagining biting a lemon—so the system can recognize when that thought occurs. In code, a Node.js app initializes the device, logs in, subscribes to raw brainwave data (256 Hz, eight channels), and then uses higher-level state/event streams instead of parsing everything manually. When a recognized event fires, the app calls the OpenAI API (gpt-4) to generate text, which can then be converted to voice and played through a Bluetooth earpiece. The practical takeaway is that brain signals can be treated like an input to AI workflows via JavaScript.
How does the Crown turn brain activity into data a computer can use?
What brain-wave frequencies correspond to different mental states mentioned in the transcript?
Why does training matter, and how is it done in the walkthrough?
What are the key characteristics of the raw brainwave stream in the code example?
How does a recognized brain event become a GPT-4 prompt?
What downstream outputs are suggested after GPT-4 generates text?
Review Questions
- What problem does event/state recognition solve compared with processing raw EEG data directly?
- Describe the data-rate and structure of the raw brainwave stream (sampling rate, batching, and channels).
- How does the system connect a trained mental cue to an OpenAI chat completion request?
Key Points
1. The Crown EEG device measures brain electrical impulses with electrodes and streams them to a JavaScript SDK for programmatic access.
2. Brain-wave frequency bands shift with mental state, with gamma waves cited around 35 Hz during high focus.
3. Neurosity’s dashboard training lets the system recognize personal thought patterns by repeating a cue and relaxing on prompt.
4. Raw brainwave streaming arrives at 256 Hz, batched into 16-sample chunks every 62.5 ms, and split across eight channels.
5. Using higher-level “states” and trained events is more practical than parsing raw EEG for real-time control.
6. A Node.js app can detect a named brain event and then call the OpenAI API (gpt-4) to generate text via chat completions.
7. Generated text can be converted to speech and delivered through Bluetooth audio, enabling hands-free AI responses.